SUMMARY
This R code is used to estimate the relationship between oxygen
consumption (MO₂) and ambient oxygen partial pressure (PO₂) in the
Common Galaxias (Galaxias maculatus). It also estimates the
critical partial pressure of oxygen for aerobic metabolism (Pcrit),
which is commonly understood as the threshold below which the oxygen
consumption rate can no longer be sustained. The associated article is
“The role of osmorespiratory compromise in hypoxia tolerance of the
purportedly oxyconforming teleost Galaxias maculatus.” If you
click the “Code” button in the top right of the HTML document, you can
download the Rmd file.
AIM
The article aims to test whether Galaxias maculatus can maintain oxygen
consumption (MO₂) as ambient PO₂ falls and, if so, at what level it
reaches the critical partial pressure of oxygen for aerobic metabolism
(Pcrit).
AUTHORS
To be added
AFFILIATIONS
To be added
Disclaimer
I (Jake Martin) am dyslexic. I have made an effort to review the script
for spelling errors, but some will likely remain. I apologise. Please
reach out via the contact details below if anything is unclear.
These are the settings for the HTML output. We will use the HTML to make our index file on GitHub.
# knitr settings
knitr::opts_chunk$set(
message = FALSE,
warning = FALSE, # no warnings
cache = TRUE, # caching to save time when knitting
tidy = TRUE
)
Jake M. Martin
Email: jake.martin@deakin.edu.au (or jake.martin.research@gmail.com)
These are the R packages required for this script. You will need to install a package called pacman to run the p_load function.
# This installs (if needed) and loads the required packages
if (!requireNamespace("pacman", quietly = TRUE)) install.packages("pacman")
pacman::p_load("ggplot2",
"ggthemes",
"ggfortify",
"gtExtras",
"igraph",
"dagitty",
"ggdag",
"ggridges",
"gghalves",
"ggExtra",
"gridExtra",
"corrplot",
"RColorBrewer",
"gt",
"gtsummary",
"grid",
"plotly",
"bayesplot", # data visualisation
"tidyverse",
"janitor",
"readxl",
"broom.mixed",
"data.table",
"devtools",
"hms", # data tidy
"marginaleffects",
"brms",
"rstan",
"performance",
"emmeans",
"tidybayes",
"vegan",
"betareg",
"lme4",
"car",
"lmerTest",
"qqplotr",
"respirometry",
"mclust",
"furrr", # modelling
"datawizard",
"SRS", # data manipulation
"sessioninfo" # reproducibility
)
Here are some custom functions used within this script.
bayes_incremental_regression_by_id(): A custom function
to build Bayesian incremental regressions. It is formatted for running a
list of sub group models (ids) in parallel with 4 cores. It uses
brm() with a Gaussian error function.
Use: The function takes a grouping factor/ID that will be used to filter the data for the regression; if none is provided, all data are used (id_i). It also takes the column name of the grouping factor/character in the data frame, again using all data if none is provided (id_name), the data frame (data), the predictor of interest (predictor), the response of interest (response), the random seed number for model reproducibility (seed_number), whether to save the model outputs (save_models, a logical argument), and the output directory for the saved .rds models (mod_output_wd).
bayes_incremental_regression_by_id(id_i = "your_id", id_name = "column name for ids", data = your_data, predictor = "predictor variable", response = "response variable", seed_number = integer, save_models = TRUE/FALSE, mod_output_wd = "directory for model output")
# Instead we could also use a distributional regression approach, by
# specifically modelling the variance by DO (e.g. sigma ~ DO). Weighting may
# not be required in this case; I don't think a higher density of values in a
# given region will affect Bayesian estimates like it does in frequentist
# models. See discourse https://discourse.mc-stan.org/t/weights-in-brm/4278
bayes_incremental_regression_by_id <- function(id_i, id_name, data, predictor, response,
seed_number, save_models, mod_output_wd) {
# Initiate an empty list to store models
models <- list()
# Check if id_name is missing, NULL, or blank, and assign NA if so
if (missing(id_name) || is.null(id_name) || id_name == "") {
id_name <- NA
}
# Check if id_i is missing, NULL, or blank, and assign NA if so
if (missing(id_i) || is.null(id_i) || id_i == "") {
id_i <- NA
}
# Filter data for the current ID if id_name is given as a factor or
# character and id_i is defined
df_i <- data %>%
dplyr::filter(if (!is.na(id_i) && (is.factor(data[[id_name]]) || is.character(data[[id_name]]))) {
!!rlang::sym(id_name) == id_i
} else {
TRUE
})
# Dynamically create formulas
formula_lm_0 <- reformulate("1", response)
formula_lm_1 <- reformulate(predictor, response)
formula_lm_2 <- reformulate(sprintf("poly(%s, 2)", predictor), response)
formula_lm_3 <- reformulate(sprintf("poly(%s, 3)", predictor), response)
# Fit and store models in the list
models[[paste0(id_i, "_lm_0")]] <- brm(bf(formula_lm_0, family = gaussian()),
data = df_i, cores = 4, seed = seed_number, save_pars = save_pars(all = save_models),
sample_prior = FALSE, silent = TRUE, file = paste0(mod_output_wd, "/", id_i,
"_lm_0"))
models[[paste0(id_i, "_lm_1")]] <- brm(bf(formula_lm_1, family = gaussian()),
data = df_i, cores = 4, seed = seed_number, save_pars = save_pars(all = save_models),
sample_prior = FALSE, silent = TRUE, file = paste0(mod_output_wd, "/", id_i,
"_lm_1"))
models[[paste0(id_i, "_lm_2")]] <- brm(bf(formula_lm_2, family = gaussian()),
data = df_i, cores = 4, seed = seed_number, save_pars = save_pars(all = save_models),
sample_prior = FALSE, silent = TRUE, file = paste0(mod_output_wd, "/", id_i,
"_lm_2"))
models[[paste0(id_i, "_lm_3")]] <- brm(bf(formula_lm_3, family = gaussian()),
data = df_i, cores = 4, seed = seed_number, save_pars = save_pars(all = save_models),
sample_prior = FALSE, silent = TRUE, file = paste0(mod_output_wd, "/", id_i,
"_lm_3"))
# Return the list of models for the current ID
return(models)
}
load_rds(): A custom function to load all .rds models in
a directory and store them in a list. Takes a directory containing ‘.rds’ files
load_rds <- function(model_dw) {
# List all .rds files in the directory
model_file_list <- list.files(path = model_dw, pattern = "\\.rds$", full.names = TRUE)
# Initialise an empty list to store models
model_store_list <- list()
# Iterate through each file and load the RDS
for (mod_i in model_file_list) {
mod <- readRDS(file = mod_i) # Read the RDS file
model_name <- tools::file_path_sans_ext(basename(mod_i)) # Extract the file name without extension
model_store_list[[model_name]] <- mod # Store it in the list
}
# Return the list of models
return(model_store_list)
}
incremental_regression_bayes_fits(): A custom function
for pulling model fits, LOO, and R2 using loo() and
bayes_R2(), respectively. Takes a list of models.
# Define Function to Process the data for each ID
incremental_regression_bayes_fits <- function(models) {
loo_results_list <- list()
# Iterate over the names of the models
for (mod_name in names(models)) {
# Extract the model
mod_i <- models[[mod_name]]
# Compute LOO results
mod_loo_results_i <- loo::loo(mod_i)
# Extract relevant LOO metrics
elpd_loo_i <- mod_loo_results_i$elpd_loo
p_loo_i <- mod_loo_results_i$p_loo
looic_i <- mod_loo_results_i$looic
# Create a data frame with metrics
df_i <- data.frame(
elpd_loo = elpd_loo_i,
p_loo = p_loo_i,
looic = looic_i,
model = mod_name
)
est_i <- tidy(mod_i, effects = "fixed", conf.int = TRUE) %>%
dplyr::select(term, estimate, conf.low, conf.high) %>%
tidyr::pivot_wider(
names_from = term, # Use `term` as column names
values_from = c(estimate, conf.low, conf.high), # Values to pivot
names_sep = "_" # Add a separator to column names
)
df_i <- cbind(df_i, est_i)
# Store the data frame in the list
loo_results_list[[mod_name]] <- df_i
}
# Combine
loo_results_combined <- bind_rows(loo_results_list)
# Get R2
r2_results <- map_dfr(models, ~ as.data.frame(bayes_R2(.x)), .id = "model") %>%
tibble::remove_rownames()
# Combine R2 and loo results
model_fit_df <- dplyr::full_join(loo_results_combined, r2_results, by = "model") %>%
dplyr::select(model, everything()) %>%
dplyr::rename(r2 = Estimate,
r2_error = Est.Error,
r2_q2.5 = Q2.5,
r2_q97.5 = Q97.5) %>%
dplyr::mutate(id = sub("_(lm_\\d+)$", "", model),
model_type = sub("^.*_(lm_\\d+)$", "\\1", model))
return(model_fit_df)
}
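The returned table can be used to compare the competing polynomial fits. As a sketch (assuming the function's output has been assigned to model_fit_df), the best-supported model per fish can be pulled out by elpd_loo:

```r
# Sketch: keep the row with the highest elpd_loo within each fish (id).
# 'model_fit_df' is assumed to be the output of incremental_regression_bayes_fits().
best_fits <- model_fit_df %>%
  dplyr::group_by(id) %>%
  dplyr::slice_max(elpd_loo, n = 1) %>%
  dplyr::ungroup() %>%
  dplyr::select(id, model_type, elpd_loo, looic, r2)
```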
bayes_mod_predictions(): This function extracts the
predicted values using fitted() from a list of models and
combines them with the original data file used for the model. These are
the posterior mean fitted values (i.e., the expected
value of the response variable given the predictor variables and the
estimated posterior distributions of the parameters) for each
observation in the dataset, along with 95% credible
intervals.
bayes_mod_predictions <- function(models, original_data) {
prediction_list <- list()
for (mod_name in names(models)) {
# Extract mod
mod_i <- models[[mod_name]]
# Make model predictions
model_predictions_i <- as.data.frame(fitted(mod_i, summary = TRUE)) %>%
dplyr::mutate(model = mod_name, id = sub("_(lm_\\d+)$", "", mod_name),
model_type = sub("^.*_(lm_\\d+)$", "\\1", mod_name)) %>%
dplyr::rename(pred_lower = Q2.5, pred_upper = Q97.5, predicted = Estimate,
pred_error = Est.Error) %>%
dplyr::select(model, everything())
id_i <- model_predictions_i$id[1]
original_data_i <- original_data %>%
dplyr::filter(id == id_i)
model_predictions_original_i <- cbind(model_predictions_i, original_data_i)
prediction_list[[mod_name]] <- model_predictions_original_i
}
predictions_df <- bind_rows(prediction_list)
return(predictions_df)
}
calcSMR(): authored by Denis Chabot;
used to estimate SMR with several different methods. See Claireaux and Chabot
(2016) [1]
[1] Claireaux, G. and Chabot, D. (2016) Responses by fishes to environmental hypoxia: integration through Fry’s concept of aerobic metabolic scope. Journal of Fish Biology https://doi.org/10.1111/jfb.12833
calcSMR = function(Y, q = c(0.1, 0.15, 0.2, 0.25, 0.3), G = 1:4) {
u = sort(Y)
the.Mclust <- Mclust(Y, G = G)
cl <- the.Mclust$classification
# sometimes, the class containing SMR is not called 1 the following
# presumes that when class 1 contains > 10% of cases, it contains SMR,
# otherwise we take class 2
cl2 <- as.data.frame(table(cl))
cl2$cl <- as.numeric(levels(cl2$cl))
valid <- cl2$Freq >= 0.1 * length(Y) # length(Y) fixes a typo in the original, which used the undefined 'time'
the.cl <- min(cl2$cl[valid])
left.distr <- Y[the.Mclust$classification == the.cl]
mlnd = the.Mclust$parameters$mean[the.cl]
CVmlnd = sd(left.distr)/mlnd * 100
quant = quantile(Y, q)
low10 = mean(u[1:10])
low10pc = mean(u[6:(5 + round(0.1 * (length(u) - 5)))])
# remove 5 outliers, keep lowest 10% of the rest, average Herrmann & Enders
# 2000
return(list(mlnd = mlnd, quant = quant, low10 = low10, low10pc = low10pc, cl = cl,
CVmlnd = CVmlnd))
}
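A minimal usage sketch for calcSMR(), on made-up MO2 values rather than the study data:

```r
# Hypothetical example: a vector of routine MO2 records where most values
# sit near SMR (~3) and a minority are elevated by activity.
set.seed(2)
mo2_vals <- c(rnorm(40, mean = 3, sd = 0.2),  # quiescent records
              rnorm(20, mean = 5, sd = 0.5))  # active records
smr_est <- calcSMR(mo2_vals)
smr_est$mlnd    # mean of the lowest normal distribution (MLND)
smr_est$CVmlnd  # CV of the MLND, used later to choose MLND vs q0.2
smr_est$quant   # quantile-based estimates (q0.1 to q0.3)
```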
calcO2crit(): authored by Denis Chabot;
used to estimate O2crit (Pcrit). See Claireaux and Chabot (2016)
[1]
Note: O2 is assumed to be in percentage of dissolved oxygen (DO)
calcO2crit <- function(Data, SMR, lowestMO2 = NA, gapLimit = 4, max.nb.MO2.for.reg = 20) {
# AUTHOR: Denis Chabot, Institut Maurice-Lamontagne, DFO, Canada first
# version written in June 2009 last updated in January 2015
method = "LS_reg" # will become 'through_origin' if intercept is > 0
if (is.na(lowestMO2))
lowestMO2 = quantile(Data$MO2[Data$DO >= 80], p = 0.05)
# Step 1: identify points where MO2 is proportional to DO
geqSMR = Data$MO2 >= lowestMO2
pivotDO = min(Data$DO[geqSMR])
lethal = Data$DO < pivotDO
N_under_SMR = sum(lethal) # points available for regression?
final_N_under_SMR = lethal # some points may be removed at Step 4
lastMO2reg = Data$MO2[Data$DO == pivotDO] # last MO2 when regulating
if (N_under_SMR > 1)
theMod = lm(MO2 ~ DO, data = Data[lethal, ])
# Step 2, add one or more point at or above SMR 2A, when there are fewer
# than 3 valid points to calculate a regression
if (N_under_SMR < 3) {
missing = 3 - sum(lethal)
not.lethal = Data$DO[geqSMR]
DOlimit = max(sort(not.lethal)[1:missing]) # highest DO acceptable
# to reach a N of 3
addedPoints = Data$DO <= DOlimit
lethal = lethal | addedPoints
theMod = lm(MO2 ~ DO, data = Data[lethal, ])
}
# 2B, add pivotDO to the fit when Step 1 yielded 3 or more values?
if (N_under_SMR >= 3) {
lethalB = Data$DO <= pivotDO # has one more value than 'lethal'
regA = theMod
regB = lm(MO2 ~ DO, data = Data[lethalB, ])
large_slope_drop = (coef(regA)[2]/coef(regB)[2]) > 1.1 # arbitrary
large_DO_gap = (max(Data$DO[lethalB]) - max(Data$DO[lethal])) > gapLimit
tooSmallMO2 = lastMO2reg < SMR
if (!large_slope_drop & !large_DO_gap & !tooSmallMO2)
{
lethal = lethalB
theMod = regB
} # otherwise we do not accept the additional point
}
# Step 3 if the user wants to limit the number of points in the regression
if (!is.na(max.nb.MO2.for.reg) & sum(lethal) > max.nb.MO2.for.reg) {
Ranks = rank(Data$DO)
lethal = Ranks <= max.nb.MO2.for.reg
theMod = lm(MO2 ~ DO, data = Data[lethal, ])
final_N_under_SMR = max.nb.MO2.for.reg
}
# Step 4
predMO2 = as.numeric(predict(theMod, data.frame(DO = Data$DO)))
Data$delta = (Data$MO2 - predMO2)/predMO2 * 100 # residuals set to zero
# when below pivotDO
Data$delta[Data$DO < pivotDO | lethal] = 0
tol = 0 # any positive residual is unacceptable
HighValues = Data$delta > tol
Ranks = rank(-1 * Data$delta)
HighMO2 = HighValues & Ranks == min(Ranks) # keep largest residual
if (sum(HighValues) > 0)
{
nblethal = sum(lethal)
Data$W = NA
Data$W[lethal] = 1/nblethal
Data$W[HighMO2] = 1
theMod = lm(MO2 ~ DO, weight = W, data = Data[lethal | HighMO2, ])
# This new regression is always an improvement, but there can still
# be points above the line, so we repeat
predMO2_2 = as.numeric(predict(theMod, data.frame(DO = Data$DO)))
Data$delta2 = (Data$MO2 - predMO2_2)/predMO2_2 * 100
Data$delta2[Data$DO < pivotDO] = 0
tol = Data$delta2[HighMO2]
HighValues2 = Data$delta2 > tol
if (sum(HighValues2) > 0)
{
Ranks2 = rank(-1 * Data$delta2)
HighMO2_2 = HighValues2 & Ranks2 == 1 # keep the largest residual
nblethal = sum(lethal)
Data$W = NA
Data$W[lethal] = 1/nblethal
Data$W[HighMO2_2] = 1
theMod2 = lm(MO2 ~ DO, weight = W, data = Data[lethal | HighMO2_2,
])
# is new slope steeper than the old one?
if (theMod2$coef[2] > theMod$coef[2]) {
theMod = theMod2
HighMO2 = HighMO2_2
}
} # end second search for high value
} # end first search for high value
Coef = coefficients(theMod)
# Step 5, check for positive intercept
AboveOrigin = Coef[1] > 0
# if it is, we use a regression that goes through the origin
if (AboveOrigin) {
theMod = lm(MO2 ~ DO - 1, data = Data[lethal, ])
Coef = c(0, coefficients(theMod)) # need to add the intercept (0)
# manually to have a pair of coefficients
method = "through_origin"
HighMO2 = rep(FALSE, nrow(Data)) # did not use the additional value
# from Step 4
}
po2crit = as.numeric(round((SMR - Coef[1])/Coef[2], 1))
sum_mod = summary(theMod)
anov_mod = anova(theMod)
O2CRIT = list(o2crit = po2crit, SMR = SMR, Nb_MO2_conforming = N_under_SMR, Nb_MO2_conf_used = final_N_under_SMR,
High_MO2_required = sum(HighMO2) == 1, origData = Data, Method = method,
mod = theMod, r2 = sum_mod$r.squared, P = anov_mod$"Pr(>F)", lethalPoints = which(lethal),
AddedPoints = which(HighMO2))
} # end function
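A minimal usage sketch for calcO2crit(), on simulated rather than study data (all numbers below are made up; they describe an oxyregulator whose MO2 conforms below roughly 30% air saturation):

```r
# Hypothetical data only: MO2 conforms to DO below ~30% saturation and is
# roughly constant (~3) above it, with some noise.
set.seed(1)
sim <- data.frame(DO = seq(5, 100, by = 5))
sim$MO2 <- ifelse(sim$DO < 30, 0.1 * sim$DO, 3) + rnorm(nrow(sim), 0, 0.05)
o2crit_sim <- calcO2crit(Data = sim, SMR = 3)
o2crit_sim$o2crit  # estimated Pcrit, in % air saturation
o2crit_sim$Method  # "LS_reg" or "through_origin"
```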
plotO2crit(): authored by Denis Chabot,
used to plot the model fits from the calcO2crit() function.
Claireaux and Chabot (2016) [1]
Note: I added abline(h = lowestMO2, col = "pink") so that I could visualise the lowestMO2 position
plotO2crit <- function(o2critobj, plotID = "", Xlab = "Dissolved oxygen (% sat.)",
Ylab = "dotitalumol", smr.cex = 0.9, o2crit.cex = 0.9, plotID.cex = 1.2, Transparency = T,
...) {
# AUTHOR: Denis Chabot, Institut Maurice-Lamontagne, DFO, Canada first
# version written in June 2009 last updated 2015-02-09 for R plotting
# devices that do not support transparency (e.g., postscript), set
# Transparency to FALSE
smr = o2critobj$SMR
if (Ylab %in% c("dotitalumol", "italumol", "dotumol", "umol", "dotitalmg", "italmg",
"dotmg", "mg")) {
switch(Ylab, dotitalumol = {
mo2.lab = expression(paste(italic(dot(M))[O[2]], " (", mu, "mol ", O[2],
" ", min^-1, " ", kg^-1, ")"))
}, italumol = {
mo2.lab = expression(paste(italic(M)[O[2]], " (", mu, "mol ", O[2], " ",
min^-1, " ", kg^-1, ")"))
}, dotumol = {
mo2.lab = expression(paste(dot(M)[O[2]], " (", mu, "mol ", O[2], " ",
min^-1, " ", kg^-1, ")"))
}, umol = {
mo2.lab = expression(paste(M[O[2]], " (", mu, "mol ", O[2], " ", min^-1,
" ", kg^-1, ")"))
}, dotitalmg = {
mo2.lab = expression(paste(italic(dot(M))[O[2]], " (mg ", O[2], " ",
h^-1, " ", kg^-1, ")"))
}, italmg = {
mo2.lab = expression(paste(italic(M)[O[2]], " (mg ", O[2], " ", h^-1,
" ", kg^-1, ")"))
}, dotmg = {
mo2.lab = expression(paste(dot(M)[O[2]], " (mg ", O[2], " ", h^-1, " ",
kg^-1, ")"))
}, mg = {
mo2.lab = expression(paste(M[O[2]], " (mg ", O[2], " ", h^-1, " ", kg^-1,
")"))
})
} else mo2.lab = Ylab
if (Transparency) {
Col = c(rgb(0, 0, 0, 0.7), "red", "orange")
} else {
Col = c(grey(0.3), "red", "orange")
}
Data = o2critobj$origData
lowestMO2 = quantile(Data$MO2[Data$DO >= 80], p = 0.05) # I added this
Data$Color = Col[1]
Data$Color[o2critobj$lethalPoints] = Col[2]
Data$Color[o2critobj$AddedPoints] = Col[3]
# ordinary LS regression without added points: blue line, red symbols
# ordinary LS regression with added points: blue line, red & orange symbols
# regression through origin: green dotted line, red symbols
line.color = ifelse(o2critobj$Method == "LS_reg", "blue", "darkgreen")
line.type = ifelse(o2critobj$Method == "LS_reg", 1, 3)
limX = c(0, max(Data$DO))
limY = c(0, max(Data$MO2))
plot(MO2 ~ DO, data = Data, xlim = limX, ylim = limY, col = Data$Color, xlab = Xlab,
ylab = mo2.lab, ...)
coord <- par("usr")
if (plotID != "") {
text(0, coord[4], plotID, cex = plotID.cex, adj = c(0, 1.2))
}
abline(h = lowestMO2, col = "pink") # I added this
abline(h = smr, col = "orange")
text(coord[1], smr, "SMR", adj = c(-0.1, 1.3), cex = smr.cex)
text(coord[1], smr, round(smr, 1), adj = c(-0.1, -0.3), cex = smr.cex)
if (!is.na(o2critobj$o2crit)) {
abline(o2critobj$mod, col = line.color, lty = line.type)
segments(o2critobj$o2crit, smr, o2critobj$o2crit, coord[3], col = line.color,
lwd = 1)
text(x = o2critobj$o2crit, y = 0, o2critobj$o2crit, col = line.color, cex = o2crit.cex,
adj = c(-0.1, 0.5))
}
} # end of function
meta_files_wd: Directory for the metadata
wd <- getwd()
meta_files_wd <- paste0(wd, "/meta-data") # creates a variable with the path of the directory we want to use
labchart_wd: Directory for Labchart estimated slopes
labchart_wd <- paste0(wd, "/lab-chart-slopes")
mod_data_wd: Directory for model output data estimated slopes
mod_data_wd <- paste0(wd, "/mod-data")
output_fig_wd: this is where we will put the figures
## [1] "Folder already exists"
output_mods_wd: this is where we will put the models
## [1] "Folder already exists"
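The chunk that creates these folders is not echoed in the HTML. It presumably does something like the following sketch (the folder names output-figures and output-models are assumptions based on the variable names):

```r
# Sketch of the hidden setup chunk: create each output folder if it does
# not already exist, otherwise report that it does.
output_fig_wd <- paste0(wd, "/output-figures")   # assumed folder name
output_mods_wd <- paste0(wd, "/output-models")   # assumed folder name
for (dir_i in c(output_fig_wd, output_mods_wd)) {
  if (!dir.exists(dir_i)) {
    dir.create(dir_i)
  } else {
    print("Folder already exists")
  }
}
```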
slope_df: We have imported the slopes extracted in LabChart during each phase of the experiment
setwd(labchart_wd)
# Get the names of all sheets in the Excel file
sheet_names <- excel_sheets("labchart-all-dates_v2.xlsx")
all_trials_select <- c("start_date", "order", "phase", "cycle", "date", "time")
slope_list <- list()
for (sheet in sheet_names) {
df <- read_excel("labchart-all-dates_v2.xlsx", sheet = sheet) %>%
dplyr::rename_with(tolower)
a_name <- paste0("a_", tolower(sheet))
a_df <- df %>%
dplyr::select(starts_with('a'), all_of(all_trials_select)) %>%
dplyr::rename(temp = a_temp) %>%
dplyr::mutate(across(starts_with('a'), as.numeric)) %>%
pivot_longer(
cols = starts_with('a'), # Select all columns to pivot
names_to = c("chamber_id", ".value"), # Separate column names into 'id' and other variables
names_sep = "_"
) %>%
dplyr::mutate(respirometer_group = "a") # Add a new column with a fixed value
slope_list[[a_name]]<- a_df
b_name <- paste0("b_", tolower(sheet))
b_df <- df %>%
dplyr::select(starts_with('b'), all_of(all_trials_select)) %>%
dplyr::rename(temp = b_temp) %>%
dplyr::mutate(across(starts_with('b'), as.numeric)) %>%
pivot_longer(
cols = starts_with('b'), # Select all columns to pivot
names_to = c("chamber_id", ".value"), # Separate column names into 'id' and other variables
names_sep = "_"
) %>%
dplyr::mutate(respirometer_group = "b")
slope_list[[b_name]] <- b_df
c_name <- paste0("c_", tolower(sheet))
c_df <- df %>%
dplyr::select(starts_with('c'), all_of(all_trials_select)) %>%
dplyr::rename(temp = c_temp,
i_cycle = cycle) %>%
dplyr::mutate(across(starts_with('c'), as.numeric)) %>%
pivot_longer(
cols = starts_with('c'), # Select all columns to pivot
names_to = c("chamber_id", ".value"), # Separate column names into 'id' and other variables
names_sep = "_"
) %>%
dplyr::mutate(respirometer_group = "c") %>%
dplyr::rename(cycle = i_cycle)
slope_list[[c_name]] <- c_df
d_name <- paste0("d_", tolower(sheet))
d_df <- df %>%
dplyr::select(starts_with('d'), all_of(all_trials_select)) %>%
dplyr::rename(temp = d_temp,
i_date = date) %>%
dplyr::mutate(across(starts_with('d'), as.numeric)) %>%
pivot_longer(
cols = starts_with('d'), # Select all columns to pivot
names_to = c("chamber_id", ".value"), # Separate column names into 'id' and other variables
names_sep = "_"
) %>%
dplyr::mutate(respirometer_group = "d") %>%
dplyr::rename(date = i_date)
slope_list[[d_name]] <- d_df
}
slope_df <- bind_rows(slope_list) %>%
dplyr::mutate(resp_cat_date = paste0(respirometer_group, "_", start_date),
chamber_n = str_extract(chamber_id, "\\d+"),
id_prox = paste0(resp_cat_date, "_", chamber_n),
time_hms = as_hms(time*3600),
date_chr = format(date, "%d/%m/%Y")
)
metadata: This is the meta data for each chamber
Note: We are also adding volume based on chamber type.
setwd(meta_files_wd)
metadata <- read_excel("Morpho.xlsx", na = "NA") %>%
dplyr::mutate(id_split = id) %>%
tidyr::separate(id_split, into = c("respirometer_group", "salinity_group", "start_date",
"chamber"), sep = "_") %>%
dplyr::mutate(volume = dplyr::case_when(chamber_type == "L" ~ 0.3, chamber_type ==
"M_M" ~ 0.105, chamber_type == "M_NM" ~ 0.11, chamber_type == "S" ~ 0.058,
chamber_type == "SM" ~ 0.075, chamber_type == "D3" ~ 0.055, TRUE ~ NA), id_prox = paste0(respirometer_group,
"_", start_date, "_", chamber), rel_size = mass/volume)
urbina_et_al_2012: This is the mean level data extracted from Urbina, Glover, and Forster (2012)[2] Figure 1a. We used the metaDigitise package in R to extract the data [3].
setwd(meta_files_wd)
urbina_et_al_2012 <- read.csv("urbina-et-al-2012-fig1a.csv")
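For reference, the one-off extraction would have been run along these lines (not evaluated here; the directory holding the figure image is an assumed name):

```r
# Not run: digitise mean-level data from Urbina et al. (2012) Figure 1a.
# library(metaDigitise)
# urbina_raw <- metaDigitise(dir = "./urbina-2012-figure")  # assumed path
# write.csv(urbina_raw, "urbina-et-al-2012-fig1a.csv", row.names = FALSE)
```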
Adding the meta data to LabChart slopes
slope_df_2 <- slope_df %>%
dplyr::select(-start_date, -respirometer_group) %>%
left_join(metadata, by = "id_prox") %>%
dplyr::mutate(light_dark = if_else(time_hms >= as_hms("07:00:00") & time_hms <
as_hms("19:00:00"), "light", "dark")) %>%
dplyr::arrange(id)
We have 64 fish with MO2 data
n <- slope_df_2 %>%
dplyr::filter(chamber_condition == "fish") %>%
dplyr::distinct(id) %>%
nrow(.)
paste0("n = ", n)
## [1] "n = 64"
With 33 from the 0 ppt group and 31 from the 9 ppt group
slope_df_2 %>%
dplyr::filter(chamber_condition == "fish") %>%
dplyr::group_by(salinity_group) %>%
dplyr::reframe(`n total` = length(unique(id))) %>%
gt() %>%
cols_label(salinity_group = "Salinity group") %>%
cols_align(align = "center", columns = everything())
| Salinity group | n total |
|---|---|
| 0 | 33 |
| 9 | 31 |
Here we calculate the mean mass and length of fish used in the experiment.
mass_length <- slope_df_2 %>%
dplyr::filter(chamber_condition == "fish") %>%
dplyr::group_by(id) %>%
dplyr::sample_n(1) %>%
dplyr::ungroup() %>%
dplyr::reframe(x_mass = round(mean(mass, na.rm = TRUE), 3), min_mass = round(min(mass,
na.rm = TRUE), 3), max_mass = round(max(mass, na.rm = TRUE), 3), x_length = round(mean(length,
na.rm = TRUE), 2), min_length = round(min(length, na.rm = TRUE), 2), max_length = round(max(length,
na.rm = TRUE), 2))
mass_mean <- mass_length %>%
pull(x_mass)
mass_min <- mass_length %>%
pull(min_mass)
mass_max <- mass_length %>%
pull(max_mass)
length_mean <- mass_length %>%
pull(x_length)
length_min <- mass_length %>%
pull(min_length)
length_max <- mass_length %>%
pull(max_length)
paste0("The mean mass of fish was ", mass_mean, " g (range: ", mass_min, "-", mass_max,
")", ", and the mean length was ", length_mean, " mm (range: ", length_min, "-",
length_max, ")")
## [1] "The mean mass of fish was 0.532 g (range: 0.21-1.6), and the mean length was 50.41 mm (range: 40-70)"
Here we calculate the relative size of fish (mass relative to chamber volume); the summary output is currently commented out.
mass_rel_length <- slope_df_2 %>%
dplyr::filter(chamber_condition == "fish") %>%
dplyr::group_by(id) %>%
dplyr::sample_n(1) %>%
dplyr::ungroup() #%>%
# dplyr::reframe(mean_rel_size = round(mean(rel_size, na.rm = TRUE), 3),
# min_rel_size = round(min(rel_size, na.rm = TRUE), 3), max_rel_size =
# round(max(rel_size, na.rm = TRUE), 3)) rel_size_mean <- mass_rel_length %>%
# pull(mean_rel_size) rel_size_min <- mass_rel_length %>% pull(min_rel_size)
# rel_size_max <- mass_rel_length %>% pull(max_rel_size) paste0('The mean
# relative size of fish to their respirometry chamber (mass g per volume L) was
# ', rel_size_mean, ' (range: ', rel_size_min, '-', rel_size_max, ')')
We will remove 6 trials which had errors. These are as follows:
remove_trial_error <- c("a_0_25nov_3", "b_0_26nov_4", "c_0_22nov_2", "c_9_26nov_2",
"c_9_26nov_4", "d_9_27nov_3")
slope_df_filter <- slope_df_2 %>%
dplyr::filter(!(id %in% remove_trial_error))
We now have 58 fish with MO2 data
n <- slope_df_filter %>%
dplyr::filter(chamber_condition == "fish") %>%
dplyr::distinct(id) %>%
nrow(.)
paste0("n = ", n)
## [1] "n = 58"
With 30 in the 0 ppt group and 28 in the 9 ppt group
slope_df_filter %>%
dplyr::filter(chamber_condition == "fish") %>%
dplyr::group_by(salinity_group) %>%
dplyr::reframe(`n total` = length(unique(id))) %>%
gt() %>%
cols_label(salinity_group = "Salinity group") %>%
cols_align(align = "center", columns = everything())
| Salinity group | n total |
|---|---|
| 0 | 30 |
| 9 | 28 |
Here we apply the following filters to the MO2 data.
First, we check for positive MO2 slopes before removing them.
slope_tidy_remove_flush <- slope_df_filter %>%
dplyr::filter(phase != "smr", chamber_condition == "fish") %>%
dplyr::group_by(id) %>%
dplyr::arrange(id, order) %>% # Ensure the data is sorted within each group
dplyr::mutate(o2_diff = if_else(row_number() == 1, 0, o2 - lag(o2)), # Calculate the difference in 'o2'
o2_diff_cumsum = cumsum(o2_diff > 1)) %>% # Checks first occurrence and sums
dplyr::filter(o2_diff_cumsum == 0) %>% # Keep rows until the first jump > 1
dplyr::ungroup() %>%
dplyr::select(-o2_diff, -o2_diff_cumsum)
positive_slopes <- slope_tidy_remove_flush %>%
dplyr::filter(chamber_condition == "fish" & mo2corr > 0)
list_positive_all <- slope_df_filter %>%
dplyr::filter(chamber_condition == "fish" & mo2corr > 0 & phase == "smr") %>%
dplyr::bind_rows(., positive_slopes) %>%
dplyr::distinct(id) %>%
dplyr::pull(id)
print(paste0("There are ", length(list_positive_all), " fish with positive slopes. These fish are: ", paste(list_positive_all, collapse = ", "), "."))
## [1] "There are 15 fish with positive slopes. These fish are: a_9_22nov_1, d_0_21nov_3, a_0_24nov_1, b_0_24nov_3, b_9_21nov_2, b_9_21nov_3, b_9_22nov_1, b_9_22nov_2, b_9_22nov_3, c_0_21nov_1, c_0_21nov_2, c_0_22nov_4, c_9_24nov_3, d_0_22nov_2, d_9_25nov_3."
Filtering the MO2 data
cycle_burn <- 0:4
slope_df_filter_1 <- slope_df_filter %>%
dplyr::filter(!(cycle %in% cycle_burn) &
mo2corr < 0 &
n > 60 &
chamber_condition == "fish"
)
# Filter out the end flush
slope_tidy_closed <- slope_df_filter_1 %>%
dplyr::filter(phase != "smr") %>%
dplyr::group_by(id) %>%
dplyr::arrange(id, order) %>% # Ensure the data is sorted within each group
dplyr::mutate(o2_diff = if_else(row_number() == 1, 0, o2 - lag(o2)), # Calculate the difference in 'o2'
o2_diff_cumsum = cumsum(o2_diff > 1)) %>% # Checks first occurrence and sums
dplyr::filter(o2_diff_cumsum == 0) %>% # Keep rows until the first jump > 1
dplyr::ungroup() %>%
dplyr::select(-o2_diff, -o2_diff_cumsum)
slope_tidy_smr <- slope_df_filter_1 %>%
dplyr::filter(phase == "smr")
slope_df_filter_2 <- rbind(slope_tidy_smr, slope_tidy_closed) %>%
dplyr::arrange(id, order)
We will calculate SMR using the calcSMR() function by Chabot, Steffensen and Farrell (2016)[1]. Specifically, we use the mean of the lowest normal distribution (MLND) where CVmlnd < 5.4 and the mean of the lower 20% quantile (q0.2) where CVmlnd > 5.4. If CVmlnd cannot be calculated, we use q0.2.
labchart_chabot_smr <- slope_df_filter_2 %>%
dplyr::filter(phase == "smr")
# Extract distinct IDs
ids <- labchart_chabot_smr %>%
dplyr::distinct(id) %>%
dplyr::pull()
# Initialise an empty list to store SMR data
smr_list <- list()
# Process each ID
for (id_i in ids) {
smr_df <- tryCatch({
# Filter the data for the current ID
df_i <- labchart_chabot_smr %>%
dplyr::filter(id == id_i) %>%
dplyr::mutate(abs_mo2corr = abs(mo2corr))
# Calculate SMR results
calcSMR_results <- calcSMR(df_i$abs_mo2corr)
CVmlnd_i <- calcSMR_results$CVmlnd
quant_i <- calcSMR_results$quant %>%
as_tibble()
quant_20per_i <- quant_i$value[3]
mlnd_i <- calcSMR_results$mlnd
smr_value <- if_else(CVmlnd_i < 5.4, mlnd_i, quant_20per_i)
smr_type <- if_else(CVmlnd_i < 5.4, "mlnd", "quant_20per")
smr_value <- if_else(is.na(smr_value), quant_20per_i, smr_value)
smr_type <- if_else(is.na(smr_type), "quant_20per", smr_type)
# Create a data frame for the current ID (returned by the tryCatch expression)
tibble(id = id_i, smr = smr_value, smr_est = smr_type)
}, error = function(e) {
# Handle errors by returning NA values; the value must be returned from
# tryCatch, as an assignment inside this handler would be discarded
tibble(id = id_i, smr = NA, smr_est = NA)
})
# Append to the list
smr_list[[id_i]] <- smr_df
}
# Combine all individual SMR data frames into one
smr_df <- bind_rows(smr_list) %>%
dplyr::rename(smr_chabot = smr, smr_chabot_method = smr_est)
slope_df_filter_3 <- slope_df_filter_2 %>%
dplyr::left_join(., smr_df, by = "id")
Here we are transforming the MO2 units. The resulting values are as follows:
MO2 is the absolute value of the background- and leak-corrected MO2 slope from LabChart (mo2corr) times the net volume of the chamber (volume - fish mass), × 60, × 60, to achieve MO2 as mg O2 h-1
MO2_g is MO2 divided by fish mass to achieve MO2 as mg O2 g-1 h-1 (i.e. mass standardised)
SMR is the absolute value of the SMR estimate using methods described by Chabot, Steffensen and Farrell (2016)[1] times the net volume of the chamber (volume - fish mass), × 60, × 60, to achieve SMR as mg O2 h-1
SMR_g is SMR divided by fish mass to achieve SMR as mg O2 g-1 h-1 (i.e. mass standardised)
DO is the dissolved oxygen percentage calculated from O2 values (mg L-1) using the recorded temperature, salinity, and a constant atmospheric pressure (1013.25 mbar)
o2_kpa is the O2 partial pressure in kilopascals (kPa). This is used to make a comparative figure only.
# Combine back into one data frame
slope_tidy <- slope_df_filter_3 %>%
dplyr::mutate(DO = conv_o2(
o2 = o2,
from = "mg_per_l",
to = "percent_a.s.",
temp = temp, #C
sal = measured_salinity,
atm_pres = 1013.25),
o2_kpa = conv_o2(
o2 = o2,
from = "mg_per_l",
to = "kPa",
temp = temp, #C
sal = measured_salinity,
atm_pres = 1013.25),
net_volume = volume - mass/1000,
MO2 = abs(mo2corr)*net_volume*60*60,
MO2_g = MO2/mass,
SMR = abs(smr_chabot)*net_volume*60*60,
SMR_g = SMR/mass
)
The following interactive plot was used to identify outliers or potential errors.
lm_lines <- slope_tidy %>%
group_by(id) %>%
summarise(
DO_seq = list(seq(min(DO), max(DO), length.out = 100)), # Generate a sequence of DO values
MO2_pred = list(predict(lm(MO2_g ~ DO, data = cur_data()), newdata = data.frame(DO = seq(min(DO), max(DO), length.out = 100))))
) %>%
unnest(c(DO_seq, MO2_pred)) # Expand lists into rows
# Create scatter plot with markers for each fish
p <- plot_ly(
data = slope_tidy,
x = ~DO,
y = ~MO2_g,
type = "scatter",
mode = "markers",
color = ~id, # Colour points by fish ID
marker = list(opacity = 0.6),
name = ~id
)
# Add regression lines for each fish
p <- p %>%
add_trace(
data = lm_lines,
x = ~DO_seq,
y = ~MO2_pred,
type = "scatter",
mode = "lines",
color = ~id, # Ensure each line matches its corresponding fish
line = list(width = 1, dash = "solid"),
showlegend = FALSE # Avoid cluttering the legend
)
# Final layout
p <- p %>%
layout(
title = "MO<sub>2</sub> vs Dissolved Oxygen with individual linear regressions",
xaxis = list(title = "Dissolved Oxygen (%)"),
yaxis = list(title = "MO<sub>2</sub> (mg<sup>-1</sup> O<sub>2</sub> g<sup>-1</sup> h<sup>-1</sup>)"),
showlegend = FALSE
)
# Display plot
p
Figure Si: Interactive plot of metabolic rate measurements (MO2; mg O2 g-1 h-1) by dissolved oxygen percentage (DO) for all fish, including all estimates during the SMR phase (i.e. the intermittent phase)
Looking at the different responses in the two salinity groups.
slope_tidy %>%
ggplot(aes(y = MO2_g, x = DO, colour = id)) + # Default aesthetics
geom_point(show.legend = FALSE) +
geom_smooth(aes(group = id), method = "lm", se = FALSE, colour = scales::alpha("black", 0.5)) + # Transparent black lines
geom_smooth(method = "lm", se = TRUE, colour = "red") + # Overall smooth line
geom_smooth(se = TRUE, colour = "red", linetype = "dashed") +
theme_clean() +
facet_wrap(~salinity_group) +
labs(
subtitle = "mo2 vs o2 by salinity treatment",
x = "Dissolved oxygen percentage (DO)",
y = "MO2 (mg O2 g/h)"
)
Figure Si: MO2 by dissolved oxygen percentage (DO), faceted by salinity treatment.
Looking at the different chamber types
slope_tidy %>%
ggplot(aes(y = MO2_g, x = DO, colour = id)) + # Default aesthetics
geom_point(show.legend = FALSE) +
geom_smooth(aes(group = id), method = "lm", se = FALSE, colour = scales::alpha("black", 0.5)) + # Transparent black lines
geom_smooth(method = "lm", se = TRUE, colour = "red") + # Overall smooth line
geom_smooth(se = TRUE, colour = "red", linetype = "dashed") +
theme_clean() +
facet_wrap(~chamber_type, scale = "free") +
labs(
subtitle = "mo2 vs o2 by chamber type",
x = "Dissolved oxygen percentage (DO)",
y = "MO2 (mg O2 g/h)"
)
Figure Si: MO2 by dissolved oxygen percentage (DO), faceted by chamber type.
Here, we have recreated a plot similar to that presented in Urbina, Glover, and Forster (2012)[1], with their mean-level data extracted from their Figure 1a using the metaDigitise package in R[3]. These data are stored in the object urbina_et_al_2012. This allows us to compare both the MO2 values and the relationship between MO2 and O2.
First making a binned data frame
min_o2_kpa <- min(slope_tidy$o2_kpa, na.rm = TRUE)
max_o2_kpa <- max(slope_tidy$o2_kpa, na.rm = TRUE)
time_bin_df <- slope_tidy %>%
mutate(o2_group = cut(o2_kpa,
breaks = seq(min_o2_kpa, max_o2_kpa, length.out = 13), # 13 breakpoints, so 12 intervals
labels = paste0("Group ", 1:12),
include.lowest = TRUE)) %>%
dplyr::group_by(o2_group) %>%
dplyr::reframe(mean_MO2_g = mean(MO2_g)*31.25, # convert mg O2 to umol O2
mean_o2_kpa = mean(o2_kpa),
n = length(MO2_g),
MO2_g_sd = sd(MO2_g)*31.25,
o2_kpa_sd = sd(o2_kpa))
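The factor 31.25 converts mg O2 to µmol O2, matching the units used by Urbina, Glover, and Forster (2012)[1]: with a molar mass of 32 g mol-1 for O2, 1 mg O2 = 1000/32 = 31.25 µmol. A quick check:

```r
# 1 mg O2 in micromoles: (1 mg / 32 g per mol) * 1000 = 31.25 umol
mg_to_umol <- function(mg_o2) mg_o2 * 1000/32
mg_to_umol(1)  # 31.25
```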
Now the plot with our mean data and the mean data from Urbina, Glover, and Forster (2012)[1]
n <- slope_tidy %>%
dplyr::distinct(id) %>%
nrow(.)
# Existing plot
p <- time_bin_df %>%
ggplot() + geom_point(data = slope_tidy, aes(y = MO2_g * 31.25, x = o2_kpa),
size = 2, color = "grey", alpha = 0.3) + geom_point(data = time_bin_df, aes(y = mean_MO2_g,
x = mean_o2_kpa), size = 3, colour = "#0E4C92", show.legend = FALSE) + geom_errorbar(data = time_bin_df,
aes(y = mean_MO2_g, x = mean_o2_kpa, ymin = mean_MO2_g - MO2_g_sd, ymax = mean_MO2_g +
MO2_g_sd), width = 0.15, colour = "#0E4C92") + geom_errorbarh(data = time_bin_df,
aes(y = mean_MO2_g, x = mean_o2_kpa, xmin = mean_o2_kpa - o2_kpa_sd, xmax = mean_o2_kpa +
o2_kpa_sd), height = 0.4, colour = "#0E4C92") + geom_point(data = urbina_et_al_2012,
aes(x = o2_mean, y = mo2_mean), size = 3, shape = 21, fill = "#D21F3C", color = "#D21F3C",
stroke = 1) + geom_errorbar(data = urbina_et_al_2012, aes(x = o2_mean, ymin = mo2_mean -
mo2_sd, ymax = mo2_mean + mo2_sd), width = 0.2, colour = "#D21F3C") + geom_errorbarh(data = urbina_et_al_2012,
aes(y = mo2_mean, xmin = o2_mean - o2_sd, xmax = o2_mean + o2_sd), height = 0.4,
colour = "#D21F3C") + annotate("text", x = 0, y = 16, label = bquote(atop("Current data (blue), " *
italic(n) * " = " * .(n), "Urbina data (red), " * italic(n) * " = 67")), hjust = 0,
vjust = 1, size = 4) + theme_clean() + labs(subtitle = "", x = "PO2 (kPa)", y = "MO2 (umol O2 g/h)") +
scale_y_continuous(limits = c(0, 16), breaks = seq(0, 16, by = 2)) + scale_x_continuous(limits = c(0,
22), breaks = seq(0, 22, by = 2))
p
Figure S1: Binned mean metabolic rate (MO2; µmol O2 g-1 h-1) by oxygen partial pressure (kPa), compared with the mean data of Urbina, Glover, and Forster (2012)[1]
Making an SMR phase only data frame
slope_tidy_smr <- slope_tidy %>%
dplyr::filter(phase == "smr")
mean_mo2_salinity <- slope_tidy_smr %>%
dplyr::group_by(salinity_group) %>%
dplyr::reframe(mean_mo2 = mean(MO2, na.rm = TRUE))
fig_i <- ggplot() +
geom_violin(data = slope_tidy_smr, aes(x = salinity_group, y = MO2, fill = salinity_group), color = NA, alpha = 0.3) +
geom_jitter(data = slope_tidy_smr, aes(x = salinity_group, y = MO2, fill = salinity_group),
shape = 21, size = 2, color = "black", alpha = 0.2) +
geom_boxplot(data = slope_tidy_smr, aes(x = salinity_group, y = MO2, fill = salinity_group),
size = 1, alpha = 0.5, outlier.shape = NA, width = 0.3) +
geom_point(data = mean_mo2_salinity,
aes(x = salinity_group, y = mean_mo2, fill = salinity_group),
size = 3, alpha = 0.8, colour = "black", stroke = 2) +
scale_fill_manual(values = c("#4B5320", "#000080")) + # Custom fill colours
scale_colour_manual(values = c("#4B5320", "#000080")) +
theme_clean() +
theme(legend.position = "none") +
labs(
subtitle = "",
x = "Salinity group (ppt)",
y = "Routine MO2 (mg O2 g/h)"
)
fig_i
Figure Si: Plot of all MO2 measures during the SMR phase by salinity treatment. The small points are the observed values, the shaded area behind the points is a kernel density of the observed data, and the box plot on top shows the median and interquartile range (IQR).
Here’s the same plot but for only the SMR, as estimated with the calcSMR function of Chabot, Steffensen and Farrell (2016)[1]. Specifically, we use the mean of the lowest normal distribution (MLND) where CVmlnd < 5.4 and the mean of the lower 20% quantile (q0.2) where CVmlnd > 5.4. If CVmlnd could not be calculated, we used q0.2.
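The selection rule described above can be sketched as a small helper. This is a minimal illustration of the decision logic only; the argument names (mlnd, cv_mlnd, q0.2) follow the quantities named in the text, not necessarily the exact output structure of calcSMR:

```r
# Pick the SMR estimate following the rule described above:
# MLND when CVmlnd < 5.4, the 20% quantile (q0.2) otherwise,
# and q0.2 when CVmlnd could not be calculated.
choose_smr <- function(mlnd, cv_mlnd, q0.2) {
  if (is.na(cv_mlnd)) return(q0.2)
  if (cv_mlnd < 5.4) mlnd else q0.2
}
choose_smr(0.061, 3.2, 0.058)  # CVmlnd < 5.4, so MLND: 0.061
choose_smr(0.061, 8.9, 0.058)  # CVmlnd > 5.4, so q0.2: 0.058
```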
smr_only <- slope_tidy_smr %>%
dplyr::group_by(id) %>%
dplyr::slice(1)
mean_smr_salinity <- smr_only %>%
dplyr::group_by(salinity_group) %>%
dplyr::reframe(mean_smr = mean(SMR, na.rm = TRUE))
fig_i <- ggplot() +
geom_violin(data = smr_only, aes(x = salinity_group, y = SMR, fill = salinity_group), color = NA, alpha = 0.3) +
geom_jitter(data = smr_only, aes(x = salinity_group, y = SMR, fill = salinity_group),
shape = 21, size = 2, color = "black", alpha = 0.2) +
geom_boxplot(data = smr_only, aes(x = salinity_group, y = SMR, fill = salinity_group),
size = 1, alpha = 0.5, outlier.shape = NA, width = 0.3) +
geom_point(data = mean_smr_salinity,
aes(x = salinity_group, y = mean_smr, fill = salinity_group),
size = 3, alpha = 0.8, colour = "black", stroke = 2) +
scale_fill_manual(values = c("#4B5320", "#000080")) + # Custom fill colours
scale_colour_manual(values = c("#4B5320", "#000080")) +
theme_clean() +
theme(legend.position = "none") +
labs(
subtitle = "",
x = "Salinity group (ppt)",
y = "MO2 (mg O2 g/h)"
)
fig_i
Create output directory if needed
ids <- slope_tidy %>%
dplyr::distinct(id) %>%
pull(id) %>%
as.list()
MO2_plot_list <- list()
# 1) Open the PDF device once
pdf(file = file.path(output_fig_slopes_wd, "combined_slopes.pdf"), width = 8, height = 6)
# 2) Loop over IDs and create each plot
for (id_i in ids) {
smr <- slope_tidy %>%
dplyr::filter(id == id_i) %>%
dplyr::slice(1) %>%
dplyr::pull(SMR)
plot <- slope_tidy %>%
dplyr::filter(id == id_i) %>%
ggplot(aes(x = o2, y = MO2)) + geom_hline(yintercept = smr, linetype = "dashed",
color = "darkred") + geom_point(aes(colour = phase)) + theme_clean() + labs(subtitle = paste0(id_i,
" slopes"), x = "Mean O2 (mg/L)", y = "MO2 (mg O2/h)")
# Instead of saving each plot separately, just print it
print(plot)
MO2_plot_list[[id_i]] <- plot
}
# 3) Close the PDF device *after* the loop
dev.off()
## png
## 2
for (p in MO2_plot_list) {
print(p)
}
Figure Si: MO2 by O2 for each fish, with the SMR represented as a dashed red line
Here we scale our predictors for the model
scale_list <- c("temp", "order", "mass")
slope_tidy_smr <- slope_tidy_smr %>%
dplyr::mutate(across(all_of(scale_list), ~scale(.x, center = TRUE, scale = FALSE),
.names = "{.col}_z"), light_dark_c = if_else(light_dark == "light", 0.5,
-0.5))
Here we will use a Bayesian Generalised Linear Mixed Model (GLMM) with a Gamma distribution and a log link, where the shape parameter (k) is also modelled as a function of predictors. This models MO2 by salinity during the SMR phase, to test whether fish held at different salinities have different SMRs. We have also added a few scaled predictors that may help describe variation in the data: mass (g; 0.21–1.6), temperature (°C; 13.841–14.277), measurement order (1–28), and light/dark cycle (light or dark; light between 07:00:00 and 19:00:00). We also include a random effect for fish id to account for multiple MO2 measures on each fish (58 fish). We allowed the shape parameter (k) to vary as a function of some of the predictors (salinity_group and order_z) to improve fit.
mo2_gamma_bf <- bf(MO2 ~ temp_z + order_z + light_dark_c + mass_z + salinity_group +
(1 | id), shape ~ salinity_group + order_z, family = Gamma(link = "log"))
These are the default priors, shown for reference before we set custom priors below.
default_prior <- get_prior(mo2_gamma_bf, data = slope_tidy_smr, family = Gamma(link = "log"))
default_prior
## prior class coef group resp dpar nlpar lb ub
## (flat) b
## (flat) b light_dark_c
## (flat) b mass_z
## (flat) b order_z
## (flat) b salinity_group9
## (flat) b temp_z
## student_t(3, -2.7, 2.5) Intercept
## student_t(3, 0, 2.5) sd 0
## student_t(3, 0, 2.5) sd id 0
## student_t(3, 0, 2.5) sd Intercept id 0
## (flat) b shape
## (flat) b order_z shape
## (flat) b salinity_group9 shape
## student_t(3, 0, 2.5) Intercept shape
## source
## default
## (vectorized)
## (vectorized)
## (vectorized)
## (vectorized)
## (vectorized)
## default
## default
## (vectorized)
## (vectorized)
## default
## (vectorized)
## (vectorized)
## default
Setting generic, weakly informative priors for all slope estimates (b)
priors_custom <- c(set_prior("normal(0, 1)", class = "b"), set_prior("student_t(3, 0, 10)",
class = "b", dpar = "shape"), set_prior("student_t(3, 0, 2.5)", class = "sd"),
set_prior("student_t(3, -2.8, 2.5)", class = "Intercept"), set_prior("student_t(3, 0, 2.5)",
class = "Intercept", dpar = "shape"))
priors_custom
## prior class coef group resp dpar nlpar lb ub source
## normal(0, 1) b <NA> <NA> user
## student_t(3, 0, 10) b shape <NA> <NA> user
## student_t(3, 0, 2.5) sd <NA> <NA> user
## student_t(3, -2.8, 2.5) Intercept <NA> <NA> user
## student_t(3, 0, 2.5) Intercept shape <NA> <NA> user
Here we run the model. The code is commented out because I have saved the fitted model for quick reloading.
## [1] "Model complete"
Prior only model
Here we reload the model
setwd(output_mods_wd)
mo2_mod_gamma <- readRDS(file = "mo2_mod_gamma.rds")
# mo2_mod_gamma_prior <- readRDS(file = 'mo2_mod_gamma_prior.rds')
Checking model convergence
color_scheme_set("red")
plot(mo2_mod_gamma, ask = F)
Checking rhat are equal to one
summary(mo2_mod_gamma)
## Family: gamma
## Links: mu = log; shape = log
## Formula: MO2 ~ temp_z + order_z + light_dark_c + mass_z + salinity_group + (1 | id)
## shape ~ salinity_group + order_z
## Data: slope_tidy_smr (Number of observations: 895)
## Draws: 4 chains, each with iter = 8000; warmup = 1000; thin = 2;
## total post-warmup draws = 14000
##
## Multilevel Hyperparameters:
## ~id (Number of levels: 58)
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS Tail_ESS
## sd(Intercept) 0.33 0.03 0.27 0.40 1.00 3788 6587
##
## Regression Coefficients:
## Estimate Est.Error l-95% CI u-95% CI Rhat Bulk_ESS
## Intercept -2.55 0.06 -2.67 -2.43 1.00 2577
## shape_Intercept 2.57 0.07 2.43 2.70 1.00 12311
## temp_z -0.34 0.22 -0.77 0.08 1.00 6520
## order_z 0.00 0.00 -0.00 0.00 1.00 11627
## light_dark_c 0.13 0.02 0.08 0.17 1.00 12338
## mass_z 1.24 0.17 0.91 1.57 1.00 3326
## salinity_group9 -0.10 0.09 -0.27 0.08 1.00 2313
## shape_salinity_group9 0.18 0.10 -0.02 0.38 1.00 11729
## shape_order_z -0.04 0.01 -0.06 -0.02 1.00 11486
## Tail_ESS
## Intercept 4682
## shape_Intercept 12782
## temp_z 9206
## order_z 11778
## light_dark_c 11954
## mass_z 5857
## salinity_group9 3563
## shape_salinity_group9 12124
## shape_order_z 11950
##
## Draws were sampled using sampling(NUTS). For each parameter, Bulk_ESS
## and Tail_ESS are effective sample size measures, and Rhat is the potential
## scale reduction factor on split chains (at convergence, Rhat = 1).
Looking at prior and predicted distributions for our parameters
# keep_parameters <- c('b_Intercept', 'b_shape_Intercept', 'b_temp_z',
# 'b_order_z', 'b_light_dark_c', 'b_mass_z', 'b_salinity_group9',
# 'b_shape_salinity_group9') prior_samples <-
# as_draws_df(mo2_mod_gamma_prior_2) posterior_samples <-
# as_draws_df(mo2_mod_gamma_2) posterior_samples$source <- 'Posterior'
# prior_samples$source <- 'Prior' combined_samples <-
# bind_rows(posterior_samples, prior_samples) # Convert to long format for
# ggplot long_samples <- pivot_longer(combined_samples, cols = -source,
# names_to = 'parameter', values_to = 'value') %>% dplyr::filter(parameter %in%
# keep_parameters) ggplot(long_samples, aes(x = value, fill = source)) +
# geom_density(alpha = 0.5) + # Overlay density plots facet_wrap(~parameter,
# scales = 'free') + # Separate by parameter labs(title = 'Prior vs Posterior
# Distributions', x = 'Parameter Value', y = 'Density') + theme_minimal()
Using the leave-one-out (LOO) measure of fit, the model appears to perform well; all Pareto k estimates are good (k < 0.7)
loo(mo2_mod_gamma)
##
## Computed from 14000 by 895 log-likelihood matrix.
##
## Estimate SE
## elpd_loo 2256.5 38.6
## p_loo 71.5 7.0
## looic -4513.0 77.2
## ------
## MCSE of elpd_loo is 0.1.
## MCSE and ESS estimates assume MCMC draws (r_eff in [0.7, 1.0]).
##
## All Pareto k estimates are good (k < 0.7).
## See help('pareto-k-diagnostic') for details.
Model predictions generally align with the observed data
color_scheme_set("red")
p <- pp_check(mo2_mod_gamma, type = "dens_overlay")
p
We did not see a meaningful difference between the routine metabolic rate for fish from the two salinity treatments.
Table Si: Fixed effect Estimates (β) and 95% Credible Intervals (95% CI)
model_est <- fixef(mo2_mod_gamma, probs = c(0.025, 0.975)) %>%
as.data.frame() %>%
tibble::rownames_to_column(var = "Predictor") %>%
dplyr::mutate(β = round(Estimate, 3), Q2.5 = round(Q2.5, 3), Q97.5 = round(Q97.5,
3), `95% CI` = paste0("[", Q2.5, ", ", Q97.5, "]"))
model_est %>%
dplyr::select(Predictor, "β", "95% CI") %>%
gt()
| Predictor | β | 95% CI |
|---|---|---|
| Intercept | -2.551 | [-2.673, -2.431] |
| shape_Intercept | 2.568 | [2.43, 2.7] |
| temp_z | -0.344 | [-0.768, 0.076] |
| order_z | 0.001 | [-0.002, 0.005] |
| light_dark_c | 0.128 | [0.08, 0.174] |
| mass_z | 1.237 | [0.908, 1.566] |
| salinity_group9 | -0.096 | [-0.272, 0.082] |
| shape_salinity_group9 | 0.179 | [-0.02, 0.379] |
| shape_order_z | -0.042 | [-0.061, -0.024] |
Looking at the marginal mean difference between salinity groups
em_results <- emmeans(mo2_mod_gamma, ~salinity_group)
contrast_results <- contrast(em_results, method = "pairwise")
em_results_df <- em_results %>%
tidy() %>%
mutate(across(where(is.numeric), ~exp(.)))
contrast_results_df <- contrast_results %>%
tidy() %>%
mutate(across(where(is.numeric), ~exp(.)))
em_results_df %>%
gt()
| salinity_group | estimate | lower.HPD | upper.HPD |
|---|---|---|---|
| 0 | 0.07805491 | 0.06933489 | 0.08823359 |
| 9 | 0.07092322 | 0.06228137 | 0.08069857 |
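Note that because the model uses a log link, exponentiating the emmeans output puts the group means back on the response scale, while exponentiating the pairwise contrast (a difference of logs) yields a ratio of group means rather than a difference. A quick numeric illustration with arbitrary values:

```r
# exp(log(a) - log(b)) is the ratio a/b, not the difference a - b
a <- 0.078  # arbitrary group means on the response scale
b <- 0.071
exp(log(a) - log(b))  # equals a/b
```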
Pulling the emmeans draws for our plot
emmeans_draws <- mo2_mod_gamma %>%
emmeans(~salinity_group) %>%
gather_emmeans_draws() %>%
dplyr::mutate(.value = exp(.value), salinity_group = as.character(salinity_group))
emmeans_contrast_draws <- mo2_mod_gamma %>%
emmeans(~salinity_group) %>%
contrast(method = "pairwise") %>%
gather_emmeans_draws() %>%
dplyr::mutate(.value = exp(.value))
NOTE: This plot is in the main text of the manuscript as Figure 1a
mean_mo2_salinity <- slope_tidy_smr %>%
dplyr::group_by(salinity_group) %>%
dplyr::reframe(mean_mo2 = mean(MO2, na.rm = TRUE))
fig_1 <- ggplot() +
geom_violin(data = slope_tidy_smr,
aes(x = salinity_group, y = MO2, fill = salinity_group),
color = NA, alpha = 0.2) +
geom_jitter(data = slope_tidy_smr,
aes(x = salinity_group, y = MO2, fill = salinity_group),
shape = 21, width = 0.3, size = 1, color = "black", alpha = 0.1) +
geom_point(data = mean_mo2_salinity,
aes(x = salinity_group, y = mean_mo2, fill = salinity_group),
size = 4, alpha = 1, stroke = 2, color = "black", shape = 21,
position = position_nudge(x = 0.05)) +
stat_pointinterval(data = emmeans_draws,
aes(x = salinity_group, y = .value),
color = "black", fill = "grey", point_interval = "mean_qi", .width = 0.95, shape = 21, stroke = 2, point_size = 4, alpha = 1,
position = position_nudge(x = -0.05)) +
scale_fill_manual(values = c("#4B5320", "#000080")) + # Custom fill colours
scale_colour_manual(values = c("#4B5320", "#000080")) +
theme_clean() +
theme(legend.position = "none") +
labs(
subtitle = "",
x = "Salinity group (ppt)",
y = "Routine MO2 (mg O2 g/h)"
)
fig_1
Figure 1a: Routine metabolic rate (MO2; mg O2 h-1, measured during the SMR phase) plotted by salinity treatment. The small transparent points are the observed values, the shaded area behind the points is a kernel density of the observed data, the large coloured point (to the right) is the observed mean, and the large grey point with error bars (to the left) is the model-estimated marginal mean (emmean) with 95% Credible Intervals (95% CI).
Here we are filtering the data frame to one measure per fish for the SMR estimate
scale_list <- c("temp_mean", "mass", "cycles")
smr <- slope_tidy_smr %>%
dplyr::group_by(id) %>%
dplyr::reframe(temp_mean = mean(temp), mass = mass[1], SMR = SMR[1], salinity_group = salinity_group[1],
cycles = length(order)) %>%
dplyr::mutate(across(all_of(scale_list), ~scale(.x, center = TRUE, scale = FALSE),
.names = "{.col}_z"))
Here we will use a Bayesian Generalised Linear Model (GLM) with a Gamma
distribution and a log link (Gamma(link = "log")), where the shape
parameter (k) is also modelled as a function of the salinity group,
shape ~ salinity_group. This models SMR by salinity, to test whether
fish held at different salinities have different SMRs. We have also
added a few scaled predictors that may help describe variation in the
data: mass (g), mean temperature (°C), and the number of measurement
cycles per fish. Because there is only one SMR estimate per fish, no
random effect for fish id is needed here.
smr_gamma_bf <- bf(SMR ~ temp_mean_z + cycles_z + mass_z + salinity_group, shape ~
salinity_group, family = Gamma(link = "log"))
These are the default priors for the model, shown for reference before we set custom priors below.
priors_default <- get_prior(smr_gamma_bf, data = smr, family = Gamma(link = "log"))
priors_default
## prior class coef group resp dpar nlpar lb ub
## (flat) b
## (flat) b cycles_z
## (flat) b mass_z
## (flat) b salinity_group9
## (flat) b temp_mean_z
## student_t(3, -2.8, 2.5) Intercept
## (flat) b shape
## (flat) b salinity_group9 shape
## student_t(3, 0, 2.5) Intercept shape
## source
## default
## (vectorized)
## (vectorized)
## (vectorized)
## (vectorized)
## default
## default
## (vectorized)
## default
These are custom, generic, weakly informative priors for all β estimates
priors_custom <- c(
set_prior("normal(0, 1)", class = "b"), # Priors for regression coefficients
set_prior("student_t(3, -2.8, 2.5)", class = "Intercept"), # Prior for the intercept
set_prior("normal(0, 1)", class = "b", dpar = "shape"), # Priors for shape parameter coefficients
set_prior("student_t(3, 0, 2.5)", class = "Intercept", dpar = "shape") # Prior for shape intercept
)
priors_custom
## prior class coef group resp dpar nlpar lb ub source
## normal(0, 1) b <NA> <NA> user
## student_t(3, -2.8, 2.5) Intercept <NA> <NA> user
## normal(0, 1) b shape <NA> <NA> user
## student_t(3, 0, 2.5) Intercept shape <NA> <NA> user
Here we run the model. The code is commented out because I have saved the fitted model for quick reloading.
## [1] "Model complete"
A prior only model
Here we reload the model
setwd(output_mods_wd)
smr_mod_gamma <- readRDS(file = "smr_mod_gamma.rds")
# smr_mod_gamma_priors <- readRDS(file = 'smr_mod_gamma_priors.rds')
Checking model convergence
color_scheme_set("red")
plot(smr_mod_gamma, ask = F)
Looking at prior and predicted distributions for our parameters
# keep_parameters <- c('b_Intercept', 'b_shape_Intercept', 'b_temp_mean_z',
# 'b_cycles_z', 'b_mass_z', 'b_salinity_group9', 'b_shape_salinity_group9',
# 'Intercept', 'Intercept_shape') prior_samples <-
# as_draws_df(smr_mod_gamma_priors) posterior_samples <-
# as_draws_df(smr_mod_gamma) posterior_samples$source <- 'Posterior'
# prior_samples$source <- 'Prior' combined_samples <-
# bind_rows(posterior_samples, prior_samples) # Convert to long format for
# ggplot long_samples <- pivot_longer(combined_samples, cols = -source,
# names_to = 'parameter', values_to = 'value') %>% dplyr::filter(parameter %in%
# keep_parameters) ggplot(long_samples, aes(x = value, fill = source)) +
# geom_density(alpha = 0.5) + # Overlay density plots facet_wrap(~parameter,
# scales = 'free') + # Separate by parameter labs(title = 'Prior vs Posterior
# Distributions', x = 'Parameter Value', y = 'Density') + theme_minimal()
Using the leave-one-out (LOO) measure of fit, the model appears to perform reasonably well; one Pareto k estimate falls outside the good range, in (0.7, 1]
loo(smr_mod_gamma)
##
## Computed from 14000 by 58 log-likelihood matrix.
##
## Estimate SE
## elpd_loo 135.9 9.2
## p_loo 9.0 3.3
## looic -271.8 18.4
## ------
## MCSE of elpd_loo is NA.
## MCSE and ESS estimates assume MCMC draws (r_eff in [0.7, 0.9]).
##
## Pareto k diagnostic values:
## Count Pct. Min. ESS
## (-Inf, 0.7] (good) 57 98.3% 183
## (0.7, 1] (bad) 1 1.7% <NA>
## (1, Inf) (very bad) 0 0.0% <NA>
## See help('pareto-k-diagnostic') for details.
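The k ranges in the table above are the standard Pareto-k bands reported by the loo package: k ≤ 0.7 is good, 0.7 < k ≤ 1 is bad, and k > 1 is very bad. A toy illustration of the banding (the k values here are made up):

```r
# Classify hypothetical Pareto-k values into the loo diagnostic bands
k <- c(0.12, 0.45, 0.73, 0.31)
bands <- cut(k, breaks = c(-Inf, 0.7, 1, Inf),
             labels = c("good", "bad", "very bad"))
table(bands)  # good: 3, bad: 1, very bad: 0
```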
Model predictions generally align with the observed data, but there is a lot of uncertainty
color_scheme_set("red")
p <- pp_check(smr_mod_gamma, type = "dens_overlay")
p
We did not see a meaningful difference between the standard metabolic rate (SMR) of fish from the two salinity treatments.
Table 1: Fixed effect Estimates (β) and 95% Credible Intervals (95% CI) from a Bayesian Generalised Linear Mixed Model (GLMM) with a Gamma distribution and a log link
model_est <- fixef(smr_mod_gamma, probs = c(0.025, 0.975)) %>%
as.data.frame() %>%
tibble::rownames_to_column(var = "Predictor") %>%
dplyr::mutate(β = round(Estimate, 3), Q2.5 = round(Q2.5, 3), Q97.5 = round(Q97.5,
3), `95% CI` = paste0("[", Q2.5, ", ", Q97.5, "]"))
model_est %>%
dplyr::select(Predictor, "β", "95% CI") %>%
gt()
| Predictor | β | 95% CI |
|---|---|---|
| Intercept | -2.741 | [-2.873, -2.606] |
| shape_Intercept | 2.063 | [1.521, 2.558] |
| temp_mean_z | -0.284 | [-1.271, 0.713] |
| cycles_z | -0.003 | [-0.024, 0.019] |
| mass_z | 1.321 | [0.918, 1.73] |
| salinity_group9 | -0.081 | [-0.276, 0.116] |
| shape_salinity_group9 | -0.013 | [-0.787, 0.741] |
Looking at the marginal mean difference between salinity groups
em_results <- emmeans(smr_mod_gamma, ~salinity_group)
contrast_results <- contrast(em_results, method = "pairwise")
em_results_df <- em_results %>%
tidy() %>%
mutate(across(where(is.numeric), ~exp(.)))
contrast_results_df <- contrast_results %>%
tidy() %>%
mutate(across(where(is.numeric), ~exp(.)))
em_results_df %>%
gt()
| salinity_group | estimate | lower.HPD | upper.HPD |
|---|---|---|---|
| 0 | 0.06448163 | 0.05667437 | 0.07396036 |
| 9 | 0.05935466 | 0.05204885 | 0.06851972 |
Pulling the emmeans draws for our plot
emmeans_draws <- smr_mod_gamma %>%
emmeans(~salinity_group) %>%
gather_emmeans_draws() %>%
dplyr::mutate(.value = exp(.value), salinity_group = as.character(salinity_group))
emmeans_contrast_draws <- smr_mod_gamma %>%
emmeans(~salinity_group) %>%
contrast(method = "pairwise") %>%
gather_emmeans_draws() %>%
dplyr::mutate(.value = exp(.value))
This plot is in the main text of the manuscript as Figure 1b
mean_smr_salinity <- smr %>%
dplyr::group_by(salinity_group) %>%
dplyr::reframe(mean_SMR = mean(SMR, na.rm = TRUE))
fig_1b <- ggplot() +
geom_violin(data = smr,
aes(x = salinity_group, y = SMR, fill = salinity_group),
color = NA, alpha = 0.2) +
geom_jitter(data = smr,
aes(x = salinity_group, y = SMR, fill = salinity_group),
shape = 21, width = 0.3, size = 1, color = "black", alpha = 0.1) +
geom_point(data = mean_smr_salinity,
aes(x = salinity_group, y = mean_SMR, fill = salinity_group),
size = 4, alpha = 1, stroke = 2, color = "black", shape = 21,
position = position_nudge(x = 0.05)) +
stat_pointinterval(data = emmeans_draws,
aes(x = salinity_group, y = .value),
color = "black", fill = "grey", point_interval = "mean_qi", .width = 0.95, shape = 21, stroke = 2, point_size = 4, alpha = 1,
position = position_nudge(x = -0.05)) +
scale_fill_manual(values = c("#4B5320", "#000080")) + # Custom fill colours
scale_colour_manual(values = c("#4B5320", "#000080")) +
theme_clean() +
theme(legend.position = "none") +
labs(
subtitle = "",
x = "Salinity group (ppt)",
y = "Standard metabolic rate (SMR; mg O2 g/h)"
)
fig_1b
Figure 1b: The standard metabolic rate estimate (SMR; mg O2 h-1) plotted by salinity treatment. The small transparent points are the observed values, the shaded area behind the points is a kernel density of the observed data, the large coloured point (to the right) is the observed mean, and the large grey point with error bars (to the left) is the model-estimated marginal mean (emmean) with 95% Credible Intervals (95% CI).
Here we are following the methods of Urbina et al. (2012)[1], using an incremental regression analysis to determine the best fit for the MO2 vs O2 data.
This approach evaluates the relative fit of each polynomial order, starting at zero and increasing to the third order, permitting a mathematical assessment of whether fish were oxyconforming or oxyregulating. If the data are best fitted by a single linear relationship (1st-order polynomial) with a positive slope, this would suggest the fish were oxyconforming. Alternatively, if the relationship is best modelled by a flat regression (0th-order polynomial) or a higher-order polynomial (2nd- or 3rd-order), the fish is likely oxyregulating.
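To make the idea concrete, here is a minimal non-Bayesian sketch of the incremental-regression approach on simulated data, comparing 0th- to 3rd-order polynomial fits by AIC (the analysis below uses brms models compared by elpd_loo instead; the simulated slope and noise values are arbitrary):

```r
set.seed(1)
do  <- runif(50, 10, 100)                         # simulated DO values
mo2 <- 0.05 + 5e-04 * do + rnorm(50, sd = 0.005)  # toy oxyconforming fish
fits <- list(lm_0 = lm(mo2 ~ 1),
             lm_1 = lm(mo2 ~ do),
             lm_2 = lm(mo2 ~ poly(do, 2)),
             lm_3 = lm(mo2 ~ poly(do, 3)))
sapply(fits, AIC)  # lower AIC = better; the linear term clearly improves on the intercept-only model
```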
Here we are using a Bayesian approach to model fitting with brm. These models take a long time to run, so I have saved them and re-loaded them to save time. I have also saved the summary data produced from the models; to save time, you can skip the commented-out code and read in the resulting summary data.
We will run our custom function, bayes_incremental_regression_by_id. This code takes a while to run. If you have already run this once, or have downloaded the saved models from GitHub, skip this step (that’s why it’s commented out) and run the next line, which loads the models.
# output_mods_bayes_wd <- paste0(output_mods_wd, './bayes-regs')
# ifelse(!dir.exists(output_mods_bayes_wd), dir.create(output_mods_bayes_wd),
# 'Folder already exists') ids <- slope_tidy %>% dplyr::distinct(id) %>%
# pull(id) plan(multisession) future_map( ids,
# bayes_incremental_regression_by_id, id_name = 'id', data = slope_tidy,
# response = 'MO2_g', predictor = 'DO', seed_number = 143019, save_models =
# TRUE, mod_output_wd = output_mods_bayes_wd ) plan(sequential)
Load all models and store them in a list; this will use a lot of memory. You can also skip this step and load the resulting data frames below. I am using the custom function load_rds, so we can compare the models and generate predictions.
# bayes_reg_mods <- load_rds(model_dw = output_mods_bayes_wd)
Get the model fit parameters (loo and r2) using the custom function incremental_regression_bayes_fits.
# setwd(mod_data_wd) bayes_reg_mods_fit <-
# incremental_regression_bayes_fits(models = bayes_reg_mods)
# write.csv(bayes_reg_mods_fit, 'bayes_reg_mods_fit.csv', row.names = FALSE)
Reading in the model fit data frame, in case you did not load all the models.
setwd(mod_data_wd)
bayes_reg_mods_fit <- read.csv("bayes_reg_mods_fit.csv")
Selecting the best fitting model
elpd_loo, or the expected log pointwise predictive density for leave-one-out cross-validation, is a metric used in Bayesian model evaluation to assess the predictive accuracy of a model. The elpd_loo is an approximation of how well the model is expected to predict new data, based on leave-one-out cross-validation. Higher elpd_loo values indicate better predictive performance.
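For example, with elpd_loo values in hand for one fish’s four candidate polynomials, ranking is just a sort, higher being better (the numbers below are made up for illustration):

```r
# Hypothetical elpd_loo values for one fish's four candidate models
elpd <- c(lm_0 = -10.2, lm_1 = -8.7, lm_2 = -7.9, lm_3 = -8.1)
rank(-elpd)                              # rank 1 = best, as in elpd_loo_rank
names(sort(elpd, decreasing = TRUE))[1]  # best model: "lm_2"
```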
best_fit_bayes_reg <- bayes_reg_mods_fit %>%
dplyr::group_by(id) %>%
dplyr::mutate(elpd_loo_rank = rank(-elpd_loo)) %>%
dplyr::select(id, model_type, elpd_loo, r2, elpd_loo_rank, r2_q2.5, r2_q97.5,
estimate_DO, conf.low_DO, conf.high_DO)
Pulling our model predictions using a custom function
bayes_mod_predictions.
# setwd(mod_data_wd) bayes_reg_mods_predictions <- bayes_mod_predictions(models
# = bayes_reg_mods, original_data = slope_tidy)
# write.csv(bayes_reg_mods_predictions, 'bayes_reg_mods_predictions.csv',
# row.names = FALSE)
Reading in the predicted data
setwd(mod_data_wd)
bayes_reg_mods_predictions <- read.csv("bayes_reg_mods_predictions.csv")
We are going to combine this with our best-fitting model data frame, so we know how each model ranks by LOO.
bayes_reg_mods_predictions <- full_join(bayes_reg_mods_predictions, best_fit_bayes_reg,
by = c("id", "model_type"))
The best fitting models were most often a 2nd-order polynomial (n = 22, 38%) or a 3rd-order polynomial (n = 16, 28%). This could suggest the presence of a critical oxygen threshold (Pcrit) where the relationship between O2 and MO2 changes. To confirm there is a Pcrit, we need to validate the shape of the polynomials and should use a more specific model to estimate the Pcrit value. In any case, this type of model is indicative of an oxyregulator.
The next most common were 0th-order and 1st-order polynomials (both n = 10, 17%). In the case of the 0th-order model, this suggests that MO2 does not show a statistically significant dependence on O2. In other words, the metabolic rate does not adjust with oxygen availability, and there is no clear critical oxygen threshold (Pcrit) where the relationship changes. This is indicative of an oxyregulator. In the case of the 1st-order polynomials, this suggests the presence of a linear relationship between O2 and MO2, which is indicative of an oxyconformer. However, to be true evidence of oxyconforming this relationship should be positive (i.e. as O2 falls, MO2 also falls). Only 2 of the 10 individuals best modelled with a linear function had positive estimates with credible intervals that did not overlap zero (Table Si).
best_mod <- best_fit_bayes_reg %>%
dplyr::filter(elpd_loo_rank == 1)
total_fish <- nrow(best_mod)
table_bwm <- best_mod %>%
dplyr::group_by(model_type) %>%
dplyr::reframe(n = length(id), percent = round((n/total_fish) * 100, 2)) %>%
dplyr::mutate(best_model_name = case_when(model_type == "lm_0" ~ "0th-order polynomial",
model_type == "lm_1" ~ "1st-order polynomial", model_type == "lm_2" ~ "2nd-order polynomial",
model_type == "lm_3" ~ "3rd-order polynomial", TRUE ~ "ERROR")) %>%
dplyr::select(best_model_name, everything(), -model_type)
table_bwm %>%
gt() %>%
cols_align(align = "center", columns = everything())
| best_model_name | n | percent |
|---|---|---|
| 0th-order polynomial | 10 | 17.24 |
| 1st-order polynomial | 10 | 17.24 |
| 2nd-order polynomial | 22 | 37.93 |
| 3rd-order polynomial | 16 | 27.59 |
Summary of the fish whose best model was a linear function.
table_lm_1 <- best_mod %>%
dplyr::filter(model_type == "lm_1") %>%
dplyr::mutate(r_sq_ci = paste0(round(r2, 3), " (", round(r2_q2.5, 3), "–",
round(r2_q97.5, 3), ")"), est_ci = paste0(round(estimate_DO, 6), " (", round(conf.low_DO,
6), "–", round(conf.high_DO, 6), ")"), conformer = if_else(conf.low_DO >
0, "Conforming", "Not conforming")) %>%
dplyr::select(id, r_sq_ci, est_ci, conformer) %>%
dplyr::arrange(conformer) %>%
dplyr::ungroup()
table_lm_1 %>%
gt() %>%
cols_align(align = "center", columns = everything()) %>%
cols_label(id = "Fish ID", r_sq_ci = "r2 (CI)", est_ci = "Estimate (CI)", conformer = "Evidence of oxyconforming")
| Fish ID | r2 (CI) | Estimate (CI) | Evidence of oxyconforming |
|---|---|---|---|
| a_9_22nov_4 | 0.312 (0.094–0.491) | 0.000849 (0.000422–0.001286) | Conforming |
| d_9_25nov_3 | 0.197 (0.012–0.394) | 0.001093 (0.000262–0.001907) | Conforming |
| a_0_24nov_1 | 0.029 (0–0.134) | 0.00017 (-0.000508–0.000862) | Not conforming |
| a_0_24nov_3 | 0.058 (0–0.202) | 0.000468 (-0.000291–0.001217) | Not conforming |
| a_0_25nov_1 | 0.137 (0.002–0.331) | -0.000358 (-0.000702–-1.5e-05) | Not conforming |
| a_9_22nov_1 | 0.084 (0–0.266) | 0.000324 (-0.000168–0.000814) | Not conforming |
| b_0_25nov_2 | 0.108 (0.001–0.29) | 0.000297 (-3e-05–0.000622) | Not conforming |
| c_0_22nov_3 | 0.072 (0–0.256) | 0.000531 (-0.000498–0.001572) | Not conforming |
| c_9_27nov_2 | 0.068 (0–0.255) | 0.000266 (-0.000304–0.000848) | Not conforming |
| d_9_25nov_2 | 0.33 (0.102–0.507) | -0.000891 (-0.001339–-0.000444) | Not conforming |
Now we are plotting each of the regressions. First, we create a directory to save the figures.
Plotting all regressions, highlighting the model with the best fit based on ELPD-LOO rank.
# Create a list to store the plots
plots <- list()
model_preds_list <- list()
for (id_i in ids) {
# Filter data for the current ID
df_i <- bayes_reg_mods_predictions %>%
dplyr::filter(id == id_i) %>%
dplyr::mutate(line_size = if_else(elpd_loo_rank == 1, 2, 1),
alpha_value = if_else(elpd_loo_rank == 1, 1, 0.4))
x_min <- df_i %>%
dplyr::reframe(min = min(DO, na.rm = TRUE)) %>%
dplyr::pull(min)
y_max <- df_i %>%
dplyr::reframe(max = max(MO2_g, na.rm = TRUE)) %>%
dplyr::pull(max)
best_weighted_model_i <- best_fit_bayes_reg %>%
dplyr::filter(id == id_i & elpd_loo_rank == 1)
poly_i_name <- best_weighted_model_i %>%
dplyr::mutate(name = case_when(
model_type == "lm_0" ~ "0th-order",
model_type == "lm_1" ~ "1st-order",
model_type == "lm_2" ~ "2nd-order",
model_type == "lm_3" ~ "3rd-order",
TRUE ~ "ERROR"
)) %>%
dplyr::pull(name)
r_i <- best_weighted_model_i %>%
dplyr::mutate(r_sq_ci = paste0(round(r2, 3), " (",
round(r2_q2.5, 3), "–",
round(r2_q97.5, 3), ")")) %>%
dplyr::pull(r_sq_ci)
# Create the plot
plot_i <- ggplot() +
geom_ribbon(data = df_i,
aes(x = DO, y = predicted, ymin = pred_lower, ymax = pred_upper, fill = model_type),
alpha = 0.1) +
geom_line(data = df_i,
aes(x = DO, y = predicted, colour = model_type, size = line_size, alpha = alpha_value)) +
geom_point(data = df_i %>% dplyr::filter(elpd_loo_rank == 1), aes(x = DO, y = MO2_g), alpha = 0.6, colour = "black", size = 2) +
scale_colour_manual(values = c("red", "blue", "green", "purple"),
labels = c("0th Order", "1st Order", "2nd Order", "3rd Order")) +
scale_size_identity() + # Use the size values directly
scale_alpha_identity(guide = "none") + # Remove the alpha legend
annotate("text", x = x_min,
y = y_max,
label = paste0("Best fit: ",poly_i_name, "\n", "r2 = ", r_i),
hjust = 0, vjust = 1, size = 4) +
labs(
title = paste(id_i),
x = "Dissolved oxygen percentage (DO)",
y = "MO2 (mg O2/g/h)",
colour = "Model") +
theme_classic() +
theme(legend.position = "none")
# Store the plot
plots[[id_i]] <- plot_i
print(plot_i)
}
#To save all plots to individual files
for (id_i in ids) {
ggsave(filename = paste0(incremental_reg_bayes_wd, "/plot_", id_i, ".png"), plot = plots[[id_i]], width = 8, height = 6)
}
##!Note: need only best fitting model!
## [1] "Folder already exists"
Here we are grouping fish by best-fitting model and getting an average trend. I have not finished this code yet.
# best_fit <- bayes_reg_mods_predictions %>% dplyr::filter(elpd_loo_rank == 1)
# ids <- best_fit %>% dplyr::distinct(model_type) %>% pull(model_type)
# plan(multisession) future_map( ids, bayes_incremental_regression_by_id,
# id_name = 'model_type', data = best_fit, response = 'MO2_g', predictor =
# 'DO', save_models = TRUE, mod_output_wd = output_mods_bayes_global_wd )
# plan(sequential)
# bayes_reg_mods <- load_rds(model_dw = output_mods_bayes_global_wd)
# ggplot() + geom_ribbon(data = df_i, aes(x = DO, y = predicted, ymin =
# pred_lower, ymax = pred_upper, fill = model_type), alpha = 0.1) +
# geom_line(data = df_i, aes(x = DO, y = predicted, colour = model_type, size =
# line_size, alpha = alpha_value)) + geom_point(data =
# bayes_reg_mods_predictions %>% dplyr::filter(elpd_loo_rank == 1), aes(x = DO,
# y = MO2_g), alpha = 0.1, colour = 'black', size = 1) + geom_line(data =
# bayes_reg_mods_predictions %>% dplyr::filter(elpd_loo_rank == 1), aes(x = DO,
# y = predicted, by = id), alpha = 0.2) + scale_colour_manual(values = c('red',
# 'blue', 'green', 'purple'), labels = c('0th Order', '1st Order', '2nd Order',
# '3rd Order')) + annotate('text', x = x_min, y = y_max, label = paste0('Best
# fit: ',poly_i_name, '\n', 'r2 = ', r_i), hjust = 0, vjust = 1, size = 4) +
# facet_wrap(~model_type) + labs( title = paste('Model Fits vs Raw Data for
# ID', id_i), x = 'Dissolved oxygen percentage (DO)', y = 'MO2 (mg o2/g/h)',
# colour = 'Model') + theme_classic()
# global_models <- list( lm_0 = lmer(MO2_g ~ 1 + (1|id), data = slope_tidy %>%
# dplyr::filter(poly == 0), weights = weight_smr), lm_1 = lmer(MO2_g ~ DO +
# (1|id), data = slope_tidy %>% dplyr::filter(poly == 1), weights =
# weight_smr), lm_2 = lmer(MO2_g ~ poly(DO, 2) + (1|id), data = slope_tidy %>%
# dplyr::filter(poly == 2), weights = weight_smr), lm_3 = lmer(MO2_g ~ poly(DO,
# 3) + (1|id), data = slope_tidy %>% dplyr::filter(poly == 3), weights =
# weight_smr) ) global_predictions <- data.frame(DO = seq(min(slope_tidy$DO),
# max(slope_tidy$DO), length.out = 100)) for (model_name in
# names(global_models)) { predictions <- predict( global_models[[model_name]],
# newdata = global_predictions, re.form = NA, # Excludes random effects
# (population-level predictions) se.fit = TRUE # Returns standard errors )
# global_predictions[[paste0(model_name, '_fit')]] <- predictions$fit
# global_predictions[[paste0(model_name, '_lwr')]] <- predictions$fit - 1.96 *
# predictions$se.fit global_predictions[[paste0(model_name, '_upr')]] <-
# predictions$fit + 1.96 * predictions$se.fit } global_predictions_long <-
# global_predictions %>% pivot_longer( cols =
# matches('lm_.*_fit|lm_.*_lwr|lm_.*_upr'), names_to = c('model', '.value'),
# names_pattern = '(lm_\\d+)_(.*)' ) %>% dplyr::mutate(best_model_name =
# case_when( model == 'lm_0' ~ '0th-order polynomial', model == 'lm_1' ~
# '1st-order polynomial', model == 'lm_2' ~ '2nd-order polynomial', model ==
# 'lm_3' ~ '3rd-order polynomial', TRUE ~ 'ERROR' ))
# best_weighted_model_pred <- best_weighted_model %>% dplyr::left_join(.,
# model_preds_df, by = c('id', 'model')) %>% dplyr::ungroup() %>%
# dplyr::mutate(best_model_name = case_when( poly == 0 ~ '0th-order
# polynomial', poly == 1 ~ '1st-order polynomial', poly == 2 ~ '2nd-order
# polynomial', poly == 3 ~ '3rd-order polynomial', TRUE ~ 'ERROR' ))
# annotation_data <- table_bwm %>% dplyr::select(best_model_name, n) fig_1 <-
# ggplot() + geom_line(data = best_weighted_model_pred, aes(x = DO, y =
# MO2_pred, color = id), size = 1, alpha = 1) + geom_point(data = slope_tidy,
# aes(x = DO, y = MO2_g), alpha = 0.1, colour = 'black', size = 2) +
# geom_ribbon(data = global_predictions_long, aes(x = DO, ymin = lwr, ymax =
# upr, group = model), fill = '#FC6C85', alpha = 0.2) + # Shaded confidence
# intervals geom_line(data = global_predictions_long, aes(x = DO, y = fit),
# size = 1.5, color = '#FF007F') + facet_wrap(~best_model_name) +
# scale_color_grey(start = 0.1, end = 0.9) + labs( title = paste('Model
# estimates and observed data grouped by best fitting model'), x = 'Dissolved
# oxygen percentage (DO)', y = 'MO2 (O2 mg/g/h)') + theme_classic() +
# theme(legend.position = 'none') + geom_text(data = annotation_data, aes(x =
# -Inf, y = Inf, label = paste0('italic(n) == ', n)), hjust = -0.1, vjust =
# 1.2, inherit.aes = FALSE, parse = TRUE) fig_1
For those fish that were best modelled with a 2nd- or 3rd-order polynomial (n = 38), we will check whether a Pcrit is present. We filter the data to only those fish.
higher_order <- c("lm_2", "lm_3")
check_pcrit_ids <- best_mod %>%
dplyr::filter(model_type %in% higher_order) %>%
dplyr::distinct(id) %>%
pull(id)
check_pcrit_df <- slope_tidy %>%
dplyr::filter(id %in% check_pcrit_ids)
We will calculate Pcrit using Chabot's method and the function calcO2crit, with our estimates of SMR (mean of the lowest three values).
This function uses the fifth percentile of the MO2 values observed at dissolved oxygen levels ≥ 80% air saturation as the criterion for minimally acceptable MO2 values. The algorithm identifies all MO2 measurements greater than this threshold. Within this subset, it finds the MO2 measurement made at the lowest DO and treats that DO as the candidate breakpoint (named pivotDO in the script). A regression is then fitted using observations at DO levels < pivotDO, and a first estimate of O2crit is calculated as the intersection of this regression line with the horizontal line representing SMR. The script then goes through validation steps to ensure that the slope of the regression is not so low that the line, projected to normoxic DO levels, passes below any MO2 values observed in normoxia. It also ensures that the intercept is not greater than zero. Corrective measures are taken if such problems are encountered.
The lowestMO2 default is quantile(MO2[DO >= 80], p = 0.05). It is used to segment the data and locate the pivotDO.
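The intersection step described above can be sketched as follows. This is an illustrative toy example only, not the calcO2crit implementation: the data, the SMR value, and the pivot_do breakpoint are all made up for demonstration.

```r
# Minimal sketch of the breakpoint idea: below a candidate pivot DO,
# regress MO2 on DO, then take O2crit as the DO at which that regression
# line crosses the horizontal SMR line. All values here are invented.
set.seed(1)
d <- data.frame(do = seq(5, 100, by = 5))                     # DO, % air saturation
d$mo2 <- pmin(0.004, 0.00008 * d$do) + rnorm(nrow(d), sd = 0.0001)
smr <- 0.004                                                   # assumed SMR (mg O2/g/h)
pivot_do <- 50                                                 # assumed candidate breakpoint
fit <- lm(mo2 ~ do, data = d, subset = do < pivot_do)          # conforming segment only
b <- coef(fit)                                                 # intercept and slope
o2crit <- as.numeric((smr - b[1]) / b[2])                      # DO where regression meets SMR
```

With these simulated data the estimate lands near the true breakpoint of 50% air saturation; calcO2crit adds the validation and corrective steps on top of this core calculation.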
ids <- check_pcrit_df %>%
dplyr::distinct(id) %>%
dplyr::pull()
pcrit_model_df_list <- list()
pcrit_models <- list()
for (id_i in ids) {
df_i <- check_pcrit_df %>%
dplyr::filter(id == id_i)
o2crit <- calcO2crit(Data = df_i, SMR = df_i$SMR[1], lowestMO2 = NA, gapLimit = 4,
max.nb.MO2.for.reg = 7)
value <- o2crit$o2crit
lowestMO2 <- quantile(df_i$MO2[df_i$DO >= 80], p = 0.05)
SMR <- o2crit$SMR
nb_mo2_conforming <- o2crit$Nb_MO2_conforming
r2 <- o2crit$r2
method <- o2crit$Method
p <- o2crit$P[1]
pcrit_model_df <- tibble(id = id_i, pcrit_vaule = value, pcrit_smr = SMR, pcrit_lowestMO2 = lowestMO2,
pcrit_nb_mo2_conforming = nb_mo2_conforming, pcrit_r2 = r2, pcrit_method = method,
pcrit_p = p)
pcrit_model_df_list[[id_i]] <- pcrit_model_df
pcrit_models[[id_i]] <- o2crit
}
pcrit_model_df <- bind_rows(pcrit_model_df_list)
Here are the plots for the Pcrit estimates
## png
## 2
Printing in the HTML document
ids <- check_pcrit_df %>%
dplyr::distinct(id) %>%
dplyr::pull()
for (id_i in ids) {
comment <- check_pcrit_df %>%
dplyr::filter(id == id_i) %>%
dplyr::slice(1) %>%
dplyr::mutate(comment = if_else(is.na(comments), "", paste0("#", comments))) %>%
pull(comment)
r2 <- pcrit_model_df %>%
dplyr::filter(id == id_i) %>%
dplyr::mutate(pcrit_r2 = round(pcrit_r2, 3)) %>%
dplyr::pull(pcrit_r2)
conforming <- pcrit_model_df %>%
dplyr::filter(id == id_i) %>%
dplyr::mutate(pcrit_nb_mo2_conforming = round(pcrit_nb_mo2_conforming, 3)) %>%
dplyr::pull(pcrit_nb_mo2_conforming)
P <- pcrit_model_df %>%
dplyr::filter(id == id_i) %>%
dplyr::mutate(pcrit_p = round(pcrit_p, 3)) %>%
dplyr::pull(pcrit_p)
SMR <- pcrit_model_df %>%
dplyr::filter(id == id_i) %>%
dplyr::mutate(pcrit_smr = round(pcrit_smr, 3)) %>%
dplyr::pull(pcrit_smr)
lowestMO2 <- pcrit_model_df %>%
dplyr::filter(id == id_i) %>%
dplyr::mutate(pcrit_lowestMO2 = round(pcrit_lowestMO2, 3)) %>%
dplyr::pull(pcrit_lowestMO2)
# Generate and render the plot
plotO2crit(o2critobj = pcrit_models[[id_i]])
# Add a title
mtext(text = paste0(id_i, " ", comment), side = 3, line = 2, adj = 0, col = "blue",
font = 2, cex = 1.2)
mtext(text = paste0("R2 = ", r2, "; p = ", P, "; CP < SMR = ", conforming, "; SMR = ",
SMR, "; lowestMO2 = ", lowestMO2), side = 3, line = 1, adj = 0, col = "blue",
font = 1, cex = 0.8)
}
We need to set some rules for when the Pcrit estimates are reliable, as it seems many of our fish do not reach a Pcrit.
We can filter for only cases where, at the lowest O2 values, three consecutive MO2 measures fall below both our SMR and the fifth percentile of the MO2 values observed at dissolved O2 levels ≥ 80%. In the model output these are called nb_mo2_conforming points. We can then visually inspect these to see if a Pcrit is present.
pcrit_list <- pcrit_model_df %>%
dplyr::filter(pcrit_nb_mo2_conforming > 2) %>%
pull(id)
paste0("Based on this rule there are ", length(pcrit_list), " fish with possible Pcrits.")
## [1] "Based on this rule there are 13 fish with possible Pcrits."
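The consecutive-points rule can be sketched like this. Note that `count_conforming` is a hypothetical helper written for illustration, not part of any respirometry package; in the analysis above, the actual counts come from calcO2crit's Nb_MO2_conforming output.

```r
# Illustrative sketch: count how many consecutive MO2 measures, starting
# from the lowest DO, fall below both SMR and the 5th percentile of
# normoxic (DO >= 80) MO2. Hypothetical helper, not package code.
count_conforming <- function(do, mo2, smr) {
  low_cut <- quantile(mo2[do >= 80], p = 0.05)  # normoxic 5th percentile
  ord <- order(do)                              # sort from the lowest DO up
  below <- mo2[ord] < pmin(smr, low_cut)        # below both thresholds?
  if (!below[1]) return(0L)                     # lowest-DO point not conforming
  rle(below)$lengths[1]                         # length of the initial run
}

# Toy data: the two lowest-DO measures sit below SMR (0.004)
count_conforming(do  = c(10, 20, 30, 90, 95),
                 mo2 = c(0.001, 0.002, 0.005, 0.004, 0.0042),
                 smr = 0.004)  # returns 2
```

A fish would pass the rule above when this count exceeds 2, mirroring the `pcrit_nb_mo2_conforming > 2` filter.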
Here are the plots of these 13 fish for visual confirmation
for (id_i in pcrit_list) {
comment <- check_pcrit_df %>%
dplyr::filter(id == id_i) %>%
dplyr::slice(1) %>%
dplyr::mutate(comment = if_else(is.na(comments), "", paste0("#", comments))) %>%
pull(comment)
r2 <- pcrit_model_df %>%
dplyr::filter(id == id_i) %>%
dplyr::mutate(pcrit_r2 = round(pcrit_r2, 3)) %>%
dplyr::pull(pcrit_r2)
conforming <- pcrit_model_df %>%
dplyr::filter(id == id_i) %>%
dplyr::mutate(pcrit_nb_mo2_conforming = round(pcrit_nb_mo2_conforming, 3)) %>%
dplyr::pull(pcrit_nb_mo2_conforming)
P <- pcrit_model_df %>%
dplyr::filter(id == id_i) %>%
dplyr::mutate(pcrit_p = round(pcrit_p, 3)) %>%
dplyr::pull(pcrit_p)
SMR <- pcrit_model_df %>%
dplyr::filter(id == id_i) %>%
dplyr::mutate(pcrit_smr = round(pcrit_smr, 3)) %>%
dplyr::pull(pcrit_smr)
lowestMO2 <- pcrit_model_df %>%
dplyr::filter(id == id_i) %>%
dplyr::mutate(pcrit_lowestMO2 = round(pcrit_lowestMO2, 3)) %>%
dplyr::pull(pcrit_lowestMO2)
# Generate and render the plot
plotO2crit(o2critobj = pcrit_models[[id_i]])
# Add a title
mtext(text = paste0(id_i, " ", comment), side = 3, line = 2, adj = 0, col = "blue",
font = 2, cex = 1.2)
mtext(text = paste0("R2 = ", r2, "; p = ", P, "; CP < SMR = ", conforming, "; SMR = ",
SMR, "; lowestMO2 = ", lowestMO2), side = 3, line = 1, adj = 0, col = "blue",
font = 1, cex = 0.8)
}
Based on visual checks, the following fish have clear Pcrit values.
do_have_pcrit <- c("a_9_21nov_3", "b_0_24nov_1", "b_0_24nov_2", "b_0_25nov_1", "b_0_25nov_3",
"b_0_26_nov_1", "b_9_21_nov_1", "b_9_21nov_2", "b_9_21nov_3", "d_0_21nov_3")
n_pcrit <- length(do_have_pcrit)
have_pcrit <- pcrit_model_df %>%
dplyr::filter(id %in% do_have_pcrit)
mean_pcrit <- have_pcrit %>%
dplyr::reframe(mean = mean(pcrit_vaule)) %>%
pull(mean) %>%
round(., 2)
min_pcrit <- have_pcrit %>%
dplyr::reframe(min = min(pcrit_vaule)) %>%
pull(min) %>%
round(., 2)
max_pcrit <- have_pcrit %>%
dplyr::reframe(max = max(pcrit_vaule)) %>%
pull(max) %>%
round(., 2)
print(paste0("There are ", n_pcrit, " fish with identified Pcrits and the mean Pcrit is ",
mean_pcrit, " (range: ", min_pcrit, "–", max_pcrit, ")"))
## [1] "There are 10 fish with identified Pcrits and the mean Pcrit is 27.48 (range: 17.9–39.9)"
##!NOTE: Check mass and salinity, add the visual assessment results
do_have_pcrit <- c("a_9_21nov_3", "b_0_24nov_1", "b_0_24nov_2", "b_0_25nov_1", "b_0_25nov_3",
"b_0_26_nov_1", "b_9_21_nov_1", "b_9_21nov_2", "b_9_21nov_3", "d_0_21nov_3")
slope_tidy %>%
dplyr::filter(id %in% do_have_pcrit) %>%
dplyr::group_by(id) %>%
dplyr::slice(1)
## # A tibble: 8 × 42
## # Groups: id [8]
## temp order phase cycle date time chamber_id o2 mo2
## <dbl> <dbl> <chr> <chr> <dttm> <dbl> <chr> <dbl> <dbl>
## 1 14.1 6 smr 5 2024-11-21 00:00:00 21.2 a3 9.63 -0.000418
## 2 13.9 6 smr 5 2024-11-24 00:00:00 17.9 b1 10.4 -0.000443
## 3 13.9 6 smr 5 2024-11-24 00:00:00 17.9 b2 10.1 -0.000700
## 4 13.9 6 smr 5 2024-11-25 00:00:00 23.5 b1 10.3 -0.000321
## 5 13.9 6 smr 5 2024-11-25 00:00:00 23.5 b3 10.2 -0.000367
## 6 14.0 6 smr 5 2024-11-21 00:00:00 21.2 b2 9.70 -0.000411
## 7 14.0 6 smr 5 2024-11-21 00:00:00 21.2 b3 9.73 -0.000329
## 8 14.1 6 smr 5 2024-11-21 00:00:00 21.2 d3 10.1 -0.000525
## # ℹ 33 more variables: n <dbl>, bground <dbl>, leak <dbl>, mo2corr <dbl>,
## # resp_cat_date <chr>, chamber_n <chr>, id_prox <chr>, time_hms <time>,
## # date_chr <chr>, holding_tank <chr>, id <chr>, chamber_condition <chr>,
## # chamber_type <chr>, mass <dbl>, length <dbl>, measured_salinity <dbl>,
## # fasted <dbl>, comments <chr>, respirometer_group <chr>,
## # salinity_group <chr>, start_date <chr>, chamber <chr>, volume <dbl>,
## # light_dark <chr>, smr_chabot <dbl>, smr_chabot_method <chr>, DO <dbl>, …
[1] Claireaux, G. and Chabot, D. (2016) Responses by fishes to environmental hypoxia: integration through Fry’s concept of aerobic metabolic scope. Journal of Fish Biology https://doi.org/10.1111/jfb.12833
[2] Urbina MA, Glover CN and Forster ME (2012) A novel oxyconforming response in the freshwater fish Galaxias maculatus. Comparative Biochemistry and Physiology Part A: Molecular & Integrative Physiology. https://doi.org/10.1016/j.cbpa.2011.11.011
[3] Pick JL, Nakagawa S and Noble DWA (2018) Reproducible, flexible and high-throughput data extraction from primary literature: The metaDigitise R package. Methods in Ecology and Evolution. https://doi.org/10.1111/2041-210X.13118
Here is a detailed list of the session information
sessioninfo::session_info()
## ─ Session info ───────────────────────────────────────────────────────────────
## setting value
## version R version 4.2.3 (2023-03-15 ucrt)
## os Windows 10 x64 (build 19044)
## system x86_64, mingw32
## ui RTerm
## language (EN)
## collate English_Australia.utf8
## ctype English_Australia.utf8
## tz Australia/Sydney
## date 2025-01-31
## pandoc 3.1.1 @ C:/Program Files/RStudio/resources/app/bin/quarto/bin/tools/ (via rmarkdown)
##
## ─ Packages ───────────────────────────────────────────────────────────────────
## package * version date (UTC) lib source
## abind 1.4-5 2016-07-21 [1] CRAN (R 4.2.0)
## arrayhelpers 1.1-0 2020-02-04 [1] CRAN (R 4.2.3)
## backports 1.4.1 2021-12-13 [1] CRAN (R 4.2.0)
## bayesplot * 1.11.1 2024-02-15 [1] CRAN (R 4.2.3)
## betareg * 3.2-0 2024-07-07 [1] CRAN (R 4.2.3)
## bitops 1.0-7 2021-04-24 [1] CRAN (R 4.2.0)
## boot 1.3-28.1 2022-11-22 [2] CRAN (R 4.2.3)
## bridgesampling 1.1-2 2021-04-16 [1] CRAN (R 4.2.3)
## brms * 2.21.0 2024-03-20 [1] CRAN (R 4.2.3)
## Brobdingnag 1.2-9 2022-10-19 [1] CRAN (R 4.2.3)
## broom * 1.0.4 2023-03-11 [1] CRAN (R 4.2.3)
## broom.helpers 1.15.0 2024-04-05 [1] CRAN (R 4.2.3)
## broom.mixed * 0.2.9.5 2024-04-01 [1] CRAN (R 4.2.3)
## bslib 0.7.0 2024-03-29 [1] CRAN (R 4.2.3)
## cachem 1.0.7 2023-02-24 [1] CRAN (R 4.2.3)
## car * 3.1-2 2023-03-30 [1] CRAN (R 4.2.3)
## carData * 3.0-5 2022-01-06 [1] CRAN (R 4.2.3)
## caTools 1.18.2 2021-03-28 [1] CRAN (R 4.2.3)
## cellranger 1.1.0 2016-07-27 [1] CRAN (R 4.2.3)
## checkmate 2.3.1 2023-12-04 [1] CRAN (R 4.2.3)
## cli 3.6.1 2023-03-23 [1] CRAN (R 4.2.3)
## cluster 2.1.4 2022-08-22 [2] CRAN (R 4.2.3)
## coda 0.19-4.1 2024-01-31 [1] CRAN (R 4.2.3)
## codetools 0.2-19 2023-02-01 [2] CRAN (R 4.2.3)
## colorspace 2.1-0 2023-01-23 [1] CRAN (R 4.2.3)
## corrplot * 0.92 2021-11-18 [1] CRAN (R 4.2.3)
## curl 5.2.1 2024-03-01 [1] CRAN (R 4.2.3)
## dagitty * 0.3-4 2023-12-07 [1] CRAN (R 4.2.3)
## data.table * 1.14.8 2023-02-17 [1] CRAN (R 4.2.3)
## datawizard * 0.12.0 2024-07-11 [1] CRAN (R 4.2.3)
## DEoptimR 1.1-3 2023-10-07 [1] CRAN (R 4.2.3)
## devtools * 2.4.5 2022-10-11 [1] CRAN (R 4.2.3)
## digest 0.6.31 2022-12-11 [1] CRAN (R 4.2.3)
## distributional 0.4.0 2024-02-07 [1] CRAN (R 4.2.3)
## doParallel 1.0.17 2022-02-07 [1] CRAN (R 4.2.3)
## dplyr * 1.1.1 2023-03-22 [1] CRAN (R 4.2.3)
## DT * 0.33 2024-04-04 [1] CRAN (R 4.2.3)
## ellipsis 0.3.2 2021-04-29 [1] CRAN (R 4.2.3)
## emmeans * 1.10.3 2024-07-01 [1] CRAN (R 4.2.3)
## estimability 1.5.1 2024-05-12 [1] CRAN (R 4.2.3)
## evaluate 0.24.0 2024-06-10 [1] CRAN (R 4.2.3)
## fansi 1.0.4 2023-01-22 [1] CRAN (R 4.2.3)
## farver 2.1.1 2022-07-06 [1] CRAN (R 4.2.3)
## fastmap 1.1.1 2023-02-24 [1] CRAN (R 4.2.3)
## flexmix 2.3-19 2023-03-16 [1] CRAN (R 4.2.3)
## fontawesome 0.5.2 2023-08-19 [1] CRAN (R 4.2.3)
## forcats * 1.0.0 2023-01-29 [1] CRAN (R 4.2.3)
## foreach 1.5.2 2022-02-02 [1] CRAN (R 4.2.3)
## formatR 1.14 2023-01-17 [1] CRAN (R 4.2.3)
## Formula 1.2-5 2023-02-24 [1] CRAN (R 4.2.2)
## fs 1.6.4 2024-04-25 [1] CRAN (R 4.2.3)
## furrr * 0.3.1 2022-08-15 [1] CRAN (R 4.2.3)
## future * 1.33.2 2024-03-26 [1] CRAN (R 4.2.3)
## generics 0.1.3 2022-07-05 [1] CRAN (R 4.2.3)
## ggdag * 0.2.12 2024-03-08 [1] CRAN (R 4.2.3)
## ggdist 3.3.2 2024-03-05 [1] CRAN (R 4.2.3)
## ggExtra * 0.10.0 2022-03-23 [1] CRAN (R 4.2.3)
## ggfortify * 0.4.16 2023-03-20 [1] CRAN (R 4.2.3)
## gghalves * 0.1.4 2022-11-20 [1] CRAN (R 4.2.3)
## ggplot2 * 3.5.1 2024-04-23 [1] CRAN (R 4.2.3)
## ggridges * 0.5.6 2024-01-23 [1] CRAN (R 4.2.3)
## ggthemes * 4.2.4 2021-01-20 [1] CRAN (R 4.2.3)
## globals 0.16.3 2024-03-08 [1] CRAN (R 4.2.3)
## glue 1.6.2 2022-02-24 [1] CRAN (R 4.2.3)
## gridExtra * 2.3 2017-09-09 [1] CRAN (R 4.2.3)
## gsw 1.2-0 2024-08-19 [1] CRAN (R 4.2.3)
## gt * 0.11.0 2024-07-09 [1] CRAN (R 4.2.3)
## gtable 0.3.5 2024-04-22 [1] CRAN (R 4.2.3)
## gtExtras * 0.4.5 2022-11-28 [1] CRAN (R 4.2.3)
## gtsummary * 1.7.2 2023-07-15 [1] CRAN (R 4.2.3)
## highr 0.11 2024-05-26 [1] CRAN (R 4.2.3)
## hms * 1.1.3 2023-03-21 [1] CRAN (R 4.2.3)
## htmltools 0.5.8.1 2024-04-04 [1] CRAN (R 4.2.3)
## htmlwidgets 1.6.2 2023-03-17 [1] CRAN (R 4.2.3)
## httpuv 1.6.9 2023-02-14 [1] CRAN (R 4.2.3)
## httr 1.4.7 2023-08-15 [1] CRAN (R 4.2.3)
## igraph * 1.6.0 2023-12-11 [1] CRAN (R 4.2.3)
## inline 0.3.19 2021-05-31 [1] CRAN (R 4.2.3)
## insight 0.20.1 2024-06-11 [1] CRAN (R 4.2.3)
## iterators 1.0.14 2022-02-05 [1] CRAN (R 4.2.3)
## janitor * 2.2.0 2023-02-02 [1] CRAN (R 4.2.3)
## jquerylib 0.1.4 2021-04-26 [1] CRAN (R 4.2.3)
## jsonlite 1.8.4 2022-12-06 [1] CRAN (R 4.2.3)
## knitr 1.42 2023-01-25 [1] CRAN (R 4.2.3)
## labeling 0.4.3 2023-08-29 [1] CRAN (R 4.2.3)
## later 1.3.0 2021-08-18 [1] CRAN (R 4.2.3)
## lattice * 0.20-45 2021-09-22 [2] CRAN (R 4.2.3)
## lazyeval 0.2.2 2019-03-15 [1] CRAN (R 4.2.3)
## lifecycle 1.0.4 2023-11-07 [1] CRAN (R 4.2.3)
## listenv 0.9.1 2024-01-29 [1] CRAN (R 4.2.3)
## lme4 * 1.1-35.5 2024-07-03 [1] CRAN (R 4.2.3)
## lmerTest * 3.1-3 2020-10-23 [1] CRAN (R 4.2.3)
## lmtest 0.9-40 2022-03-21 [1] CRAN (R 4.2.3)
## loo 2.8.0 2024-07-03 [1] CRAN (R 4.2.3)
## lubridate * 1.9.2 2023-02-10 [1] CRAN (R 4.2.3)
## magrittr 2.0.3 2022-03-30 [1] CRAN (R 4.2.3)
## marginaleffects * 0.21.0 2024-06-14 [1] CRAN (R 4.2.3)
## MASS 7.3-58.2 2023-01-23 [2] CRAN (R 4.2.3)
## Matrix * 1.5-3 2022-11-11 [2] CRAN (R 4.2.3)
## matrixStats 1.3.0 2024-04-11 [1] CRAN (R 4.2.3)
## mclust * 6.1.1 2024-04-29 [1] CRAN (R 4.2.3)
## memoise 2.0.1 2021-11-26 [1] CRAN (R 4.2.3)
## mgcv 1.9-1 2023-12-21 [1] CRAN (R 4.2.3)
## mime 0.12 2021-09-28 [1] CRAN (R 4.2.0)
## miniUI 0.1.1.1 2018-05-18 [1] CRAN (R 4.2.3)
## minqa 1.2.7 2024-05-20 [1] CRAN (R 4.2.3)
## modeltools 0.2-23 2020-03-05 [1] CRAN (R 4.2.0)
## multcomp 1.4-25 2023-06-20 [1] CRAN (R 4.2.3)
## munsell 0.5.1 2024-04-01 [1] CRAN (R 4.2.3)
## mvtnorm 1.2-5 2024-05-21 [1] CRAN (R 4.2.3)
## nlme 3.1-162 2023-01-31 [2] CRAN (R 4.2.3)
## nloptr 2.1.1 2024-06-25 [1] CRAN (R 4.2.3)
## nnet 7.3-18 2022-09-28 [2] CRAN (R 4.2.3)
## numDeriv 2016.8-1.1 2019-06-06 [1] CRAN (R 4.2.0)
## oce 1.8-3 2024-08-17 [1] CRAN (R 4.2.3)
## opdisDownsampling 1.0.1 2024-04-15 [1] CRAN (R 4.2.3)
## paletteer 1.6.0 2024-01-21 [1] CRAN (R 4.2.3)
## parallelly 1.37.1 2024-02-29 [1] CRAN (R 4.2.3)
## pbmcapply 1.5.1 2022-04-28 [1] CRAN (R 4.2.0)
## performance * 0.12.0 2024-06-08 [1] CRAN (R 4.2.3)
## permute * 0.9-7 2022-01-27 [1] CRAN (R 4.2.3)
## pillar 1.9.0 2023-03-22 [1] CRAN (R 4.2.3)
## pkgbuild 1.4.4 2024-03-17 [1] CRAN (R 4.2.3)
## pkgconfig 2.0.3 2019-09-22 [1] CRAN (R 4.2.3)
## pkgload 1.3.2 2022-11-16 [1] CRAN (R 4.2.3)
## plotly * 4.10.1 2022-11-07 [1] CRAN (R 4.2.3)
## plyr 1.8.8 2022-11-11 [1] CRAN (R 4.2.3)
## posterior 1.6.0 2024-07-03 [1] CRAN (R 4.2.3)
## pracma 2.4.4 2023-11-10 [1] CRAN (R 4.2.3)
## profvis 0.3.8 2023-05-02 [1] CRAN (R 4.2.3)
## promises 1.2.0.1 2021-02-11 [1] CRAN (R 4.2.3)
## purrr * 1.0.1 2023-01-10 [1] CRAN (R 4.2.3)
## qqconf 1.3.2 2023-04-14 [1] CRAN (R 4.2.3)
## qqplotr * 0.0.6 2023-01-25 [1] CRAN (R 4.2.3)
## QuickJSR 1.3.0 2024-07-08 [1] CRAN (R 4.2.3)
## R6 2.5.1 2021-08-19 [1] CRAN (R 4.2.3)
## ragg 1.2.5 2023-01-12 [1] CRAN (R 4.2.3)
## RColorBrewer * 1.1-3 2022-04-03 [1] CRAN (R 4.2.0)
## Rcpp * 1.0.10 2023-01-22 [1] CRAN (R 4.2.3)
## RcppParallel 5.1.8 2024-07-06 [1] CRAN (R 4.2.3)
## readr * 2.1.4 2023-02-10 [1] CRAN (R 4.2.3)
## readxl * 1.4.2 2023-02-09 [1] CRAN (R 4.2.3)
## rematch2 2.1.2 2020-05-01 [1] CRAN (R 4.2.3)
## remotes 2.5.0 2024-03-17 [1] CRAN (R 4.2.3)
## reshape2 1.4.4 2020-04-09 [1] CRAN (R 4.2.3)
## respirometry * 2.0.0 2024-07-18 [1] CRAN (R 4.2.3)
## rlang 1.1.0 2023-03-14 [1] CRAN (R 4.2.3)
## rmarkdown 2.21 2023-03-26 [1] CRAN (R 4.2.3)
## robustbase 0.99-4-1 2024-09-27 [1] CRAN (R 4.2.3)
## rstan * 2.32.6 2024-03-05 [1] CRAN (R 4.2.3)
## rstantools 2.4.0 2024-01-31 [1] CRAN (R 4.2.3)
## rstudioapi 0.14 2022-08-22 [1] CRAN (R 4.2.3)
## sandwich 3.1-0 2023-12-11 [1] CRAN (R 4.2.3)
## sass 0.4.9 2024-03-15 [1] CRAN (R 4.2.3)
## scales 1.3.0 2023-11-28 [1] CRAN (R 4.2.3)
## sessioninfo * 1.2.2 2021-12-06 [1] CRAN (R 4.2.3)
## shiny * 1.8.1.1 2024-04-02 [1] CRAN (R 4.2.3)
## shinybusy * 0.3.3 2024-03-09 [1] CRAN (R 4.2.3)
## shinycssloaders * 1.0.0 2020-07-28 [1] CRAN (R 4.2.3)
## snakecase 0.11.1 2023-08-27 [1] CRAN (R 4.2.3)
## SRS * 0.2.3 2022-03-27 [1] CRAN (R 4.2.3)
## StanHeaders * 2.32.9 2024-05-29 [1] CRAN (R 4.2.3)
## stringi 1.7.12 2023-01-11 [1] CRAN (R 4.2.2)
## stringr * 1.5.0 2022-12-02 [1] CRAN (R 4.2.3)
## survival 3.5-3 2023-02-12 [2] CRAN (R 4.2.3)
## svUnit 1.0.6 2021-04-19 [1] CRAN (R 4.2.3)
## systemfonts 1.0.4 2022-02-11 [1] CRAN (R 4.2.3)
## tensorA 0.36.2.1 2023-12-13 [1] CRAN (R 4.2.3)
## textshaping 0.3.6 2021-10-13 [1] CRAN (R 4.2.3)
## TH.data 1.1-2 2023-04-17 [1] CRAN (R 4.2.3)
## tibble * 3.2.1 2023-03-20 [1] CRAN (R 4.2.3)
## tidybayes * 3.0.6 2023-08-12 [1] CRAN (R 4.2.3)
## tidygraph 1.3.0 2023-12-18 [1] CRAN (R 4.2.3)
## tidyr * 1.3.0 2023-01-24 [1] CRAN (R 4.2.3)
## tidyselect 1.2.1 2024-03-11 [1] CRAN (R 4.2.3)
## tidyverse * 2.0.0 2023-02-22 [1] CRAN (R 4.2.3)
## timechange 0.2.0 2023-01-11 [1] CRAN (R 4.2.3)
## twosamples 2.0.1 2023-06-23 [1] CRAN (R 4.2.3)
## tzdb 0.3.0 2022-03-28 [1] CRAN (R 4.2.3)
## urlchecker 1.0.1 2021-11-30 [1] CRAN (R 4.2.3)
## usethis * 2.2.3 2024-02-19 [1] CRAN (R 4.2.3)
## utf8 1.2.3 2023-01-31 [1] CRAN (R 4.2.3)
## V8 4.4.2 2024-02-15 [1] CRAN (R 4.2.3)
## vctrs 0.6.1 2023-03-22 [1] CRAN (R 4.2.3)
## vegan * 2.6-6.1 2024-05-21 [1] CRAN (R 4.2.3)
## viridisLite 0.4.2 2023-05-02 [1] CRAN (R 4.2.3)
## withr 3.0.0 2024-01-16 [1] CRAN (R 4.2.3)
## xfun 0.38 2023-03-24 [1] CRAN (R 4.2.3)
## xml2 1.3.3 2021-11-30 [1] CRAN (R 4.2.3)
## xtable 1.8-4 2019-04-21 [1] CRAN (R 4.2.3)
## yaml 2.3.7 2023-01-23 [1] CRAN (R 4.2.3)
## zoo 1.8-12 2023-04-13 [1] CRAN (R 4.2.3)
##
## [1] C:/R
## [2] C:/Program Files/R/R-4.2.3/library
##
## ──────────────────────────────────────────────────────────────────────────────